MedEdPublish
2025 Dec 17;15:32. Originally published 2025 Jul 11. [Version 2] doi: 10.12688/mep.20799.2

Self-reported impacts one year after a brief health equity/implicit bias course for academic clinicians

Janice Sabin 1,a, Grace Guenther 1, Kris Piu Kwan Ma 1, Bernadette York 1, Wendy Barrington 2, Bianca Frogner 1
PMCID: PMC12759272  PMID: 41488413

Version Changes

Revised. Amendments from Version 1

This article underwent a major revision based upon reviewer comments. We added the implicit and explicit bias scores from the original Table 2 to Table 1. We compared participants who did and did not respond to the Year 2 study and reported the differences between the two groups in the text and in a table available in the data repository (https://doi.org/10.17605/OSF.IO/PGS8Q 30). We changed our analysis of stages of change and implicit and explicit bias from a Pearson product-moment correlation analysis to a one-way between-groups analysis of variance (ANOVA) to better understand differences in IAT scores, participant stages of change, and impact of the course. We removed the correlation Table 5 and added this information to new Tables 4 and 5. We reported implicit and explicit measures using standard IAT D scoring, reporting the mean, standard deviation, and tests of difference from zero, and describe interpretation of the strength of these measures using standard Project Implicit cutoffs [https://implicit.harvard.edu/implicit/]. We expanded our discussion to include additional limitations. We clarified that stages of change were self-reported measures rather than direct objective observation and included this in the limitations section. Based on reviewer comments, we analyzed responses to the four reflective questions to identify the number of unique participants who responded that, yes, the course had an impact on teaching and/or practice to at least one question; this analysis showed that 75.0% of individual participants reported such an impact. In addition, we reported instances of self-reported increased awareness (n=35) and instances of actions taken (n=47). We clarified and properly cited the IBRM work that we referenced, and responded to reviewer comments point by point.

Abstract

Purpose

The purpose of this study was to explore whether a brief implicit bias course had lasting effects on clinicians’ teaching and practice one year after they took the course, and whether implicit and explicit bias were associated with the self-reported impact of the course.

Method

This was a mixed-method study. We followed up with a sample of 119 academic clinicians who completed the baseline study in December 2019. Recruitment for the current study was conducted between December 2020 and March 2021. Participants responded online to survey questions about whether the course had an impact on their teaching and practice. We categorized qualitative responses to these questions using Prochaska & DiClemente’s Stages of Change Model of Behavior Change. Implicit and explicit race and gender bias data were collected at baseline.

Results

The response rate was 47.1% (N=56). Participants were 64.3% female and 66.1% White; 67.9% were Medical Doctors (MD) and 82.1% worked in an academic healthcare system. Overall, we found slight implicit pro-White bias (mean=0.27, SD 0.45, p<0.001) and male-career gender bias (mean=0.33, SD 0.31, p<0.001). Across all four questions, 42 unique participants (75.0%) responded “yes” to at least one question, reporting that the course had an impact on their teaching/mentoring and/or practice. Thirty-five participants (62.5%) reported that the course had an impact on their teaching and 23 (41.1%) reported an impact on their clinical practice. Participants reported 35 instances of increased bias awareness and 47 instances of actions taken due to the course. Those who reported no impact of the course on teaching held no implicit race bias, while those who reported actions taken held moderate implicit pro-White bias.

Conclusions

This study found that the majority of study participants reported lasting effects of the course on their teaching and/or practice. Brief implicit bias education can impact clinicians’ teaching and practice. 

Keywords: implicit bias education, lasting effects, implicit bias, explicit bias

Introduction

Healthcare professionals’ implicit bias is one of many factors contributing to disparities in healthcare and health outcomes 1, 2 . Implicit bias is defined as “attitudes or stereotypes that affect our understanding, decision-making and behavior without our even realizing it” 3 . As in the general population, implicit bias exists among health professionals in the areas of race, gender, ethnicity, sexual orientation, age, weight, mental illness, and others 1, 4– 6 . Healthcare professionals’ implicit bias contributes to poor clinician-patient communication and to disparities in pain management, prescribing of lipid-lowering medication for women, treatment of coronary heart disease, and other areas 1, 7, 8 .

Implicit bias education for students, trainees, and clinicians in practice has been integrated into some, but not all, medical school curricula and healthcare system clinician trainings. There is no clear evidence of optimal implicit bias curricular content or educational evaluation strategies 9 , and no irrefutable evidence that implicit bias education impacts teaching and practice. The focus of most implicit bias education is not to eliminate bias, which is by its nature ubiquitous and hidden and has proven to persist in the long term 10 , but rather to recognize how bias manifests and to manage the impact of implicit attitudes and beliefs in teaching, practice, and healthcare delivery 11, 12 . At the University of Washington, authors of this study found that academic clinical faculty needed additional foundational health equity/implicit bias education to feel competent teaching about implicit bias in healthcare. In 2017, we developed a brief, online course for academic clinicians titled Implicit Bias in the Clinical and Learning Environment to meet this need.

Evaluation of health equity/implicit bias education for clinical faculty who teach is in its early stages, and little is known about lasting effects of implicit bias education on clinicians’ behavior. This study was exploratory in nature: we did not know how implicit and explicit biases would affect reports of the impact of the course on teaching and practice at a one-year follow up. Studies, including the baseline study from which we drew our sample 13 , have found that brief online implicit bias education can increase bias awareness and intentions to change behavior 13– 15 . In this study, we returned to a sample of primary care clinical faculty one year after they took a brief online course and used personal reflection for participants to report on the impact of the course on their teaching and clinical practice over the past year. The aim of this study was to explore whether and how the course had an impact on clinicians’ teaching and practice during the year following the course. Our research questions were: 1. Does health equity/implicit bias education have clinician self-reported lasting effects on teaching and practice, and if so, how? and 2. Is clinician implicit and explicit bias associated with self-reported lasting effects of health equity/implicit bias education?

Methods

Study design and sample

This mixed-method exploratory study returned to a sample of academic primary care clinicians who initially completed a survey and online health equity/implicit bias education, Implicit Bias in the Clinical and Learning Environment 13 , between September 2019 and December 2019, which we refer to as our baseline study; the course is publicly available at https://depts.washington.edu/somalt/implicitbias-pi/story.html. One year later, participants who completed the baseline study were invited to participate in the current follow up study to evaluate lasting effects of the course. The follow up study was conducted between December 2020 and March 2021. Our baseline sample consisted of 119 U.S. academic family, internal, and emergency medicine providers, nurse practitioners, and physician assistants recruited from all nine U.S. Census Divisions 13 . Demographic and background information, including personal and professional characteristics and implicit and explicit race and gender bias measures, was collected in the baseline study. In the follow up survey, we updated participants’ work position and asked four reflection questions about whether and how the course impacted their teaching and clinical practice over the past year. The University of Washington Human Subjects Institutional Review Board approved the study as minimal risk [Baseline study approval IRB #00006978; modification for follow up IRB #00008382, approved 11/13/2020].

Implicit Bias Education: Participants engaged in brief online health equity/implicit bias education as part of the baseline study. The course was developed as foundational information for academic clinicians who teach by a team (including author JS) with expertise in medical education, adult teaching and learning theory, social determinants of health, and implicit bias in healthcare, together with medical school clinician-administrators and academic clinicians who practice across a wide range of settings. The course was designed to be brief (35–40 minutes) so that it would not overburden busy clinicians. The course had three learning objectives: 1) define implicit bias and how it is manifested in health care, 2) recognize how implicit bias may be operating in the clinical setting and learning environment, and 3) apply strategies that can be used to minimize the impact of implicit bias. Although the course was developed prior to publication of the Gonzalez et al. (2021) 11 framework for operationalizing implicit bias recognition and management (IBRM) education and the Sukhera and Watling (2018) 16 operationalization of a theoretical framework for implicit bias education, the course incorporated many features of these models, such as creating a safe environment, content on the science of implicit bias, normalizing bias, evidence of bias in the learning environment and practice, and increasing awareness of implicit bias 16 . Course content included the history of racism in medicine, information about the social determinants of health, evidence of discrimination in healthcare, the science of implicit bias, and evidence about how implicit bias manifests in clinical care and the learning environment. Although skills practice was not part of the course, participants were provided with actionable strategies to mitigate the impact of bias in teaching and practice. Upon entering this follow up study, participants had the opportunity to revisit the course.

Implicit Bias Measures: In the baseline study we measured participant implicit race and gender bias using the standard Race Implicit Association Test (IAT) and Gender-Career IAT [available at: https://implicit.harvard.edu/implicit/] designed by scientists at Project Implicit using best practices for IAT design 17, 18 . The IAT is a widely used, computer-based test of implicit social cognition that measures the relative strength of positive and negative associations toward one social group compared with another 14 . The Black-White Race IAT asks test takers to sort and pair facial images of the target concept of race (faces of Black People and faces of White People) with words that represent “good” (e.g., glorious) or “bad” (e.g., yucky) as they appear on a computer screen. The Gender-Career IAT measures gender stereotypes using the target concept of gender, represented by traditionally male names (e.g., Ben) versus female names (e.g., Rebecca), and the concept of “career”, represented by words associated with career (e.g., office), versus “family”, represented by words associated with family (e.g., home). The difference in the time taken to sort and pair these images and words as they rapidly appear on the screen, measured in milliseconds, indicates the strength of the automatic association 19 . The IAT and explicit measures were used as one-time, baseline measures of bias to characterize the sample and were not used as an IAT-with-feedback intervention to increase awareness of participants’ personal bias. To explore the topic further, participants were given the Project Implicit web address [https://implicit.harvard.edu/implicit/], which provides the public with an opportunity to take IATs with personal feedback. Implicit measures are continuous, ranging from −2.0 to +2.0, with “0” signifying no bias.
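The latency-based scoring described above can be illustrated with a simplified sketch of the IAT D score: the difference in mean response latency between the two pairing conditions, divided by the pooled standard deviation of all trials. This is a hedged simplification for illustration only; the full Greenwald et al. (2003) algorithm also handles error trials, outlier latencies, and block structure.

```python
import statistics

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score (illustrative, not the full published algorithm).

    compatible_ms / incompatible_ms: lists of response latencies in
    milliseconds for the two pairing conditions. A positive score means
    slower responding in the 'incompatible' pairing, i.e., a stronger
    automatic association with the 'compatible' pairing.
    """
    # Mean latency difference between conditions
    diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    # Pooled sample standard deviation across all trials
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return diff / pooled_sd
```

Because the difference is standardized by the trial-level variability, D scores fall on an effect-size-like scale, which is why cutoffs such as 0.15, 0.35, and 0.65 can be applied uniformly across test takers.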

Explicit Bias Measures: At baseline, we used standard explicit measures designed using best practices 17 that correspond with the Race IAT [available at: https://implicit.harvard.edu/implicit/] to measure respondents’ preference for Black People versus White People on a 7-point preference response scale 20 . The explicit race measure question was: Which statement best describes you? 1. I strongly prefer Black People compared to White People; 2. I moderately prefer Black People compared to White People; 3. I slightly prefer Black People compared to White People; 4. I like Black People and White People equally; 5. I slightly prefer White People compared to Black People; 6. I moderately prefer White People compared to Black People; and 7. I strongly prefer White People compared to Black People. For the Gender-Career IAT, we used two separate explicit measures in the same format and on the same 7-point scale: one that asked about the association of gender (male versus female) with “career” and one that asked about the association of gender with “family”. Explicit measures were analyzed on a scale ranging from −3.0 to +3.0, with “0” signifying no bias.

Open-ended, Reflective Questions: Participants were asked four yes/no reflective questions and to provide a typed response to the following prompts: 1. Reflecting on this course, has the content of the course impacted your teaching and/or mentoring? If yes, how? 2. Has the content of the course impacted your teaching and/or mentoring due to the COVID-19 pandemic, the current social justice and equality movement, or the current healthcare policy debate? If yes, how? 3. Reflecting on this course, has the content of the course impacted your clinical practice? If yes, how? and 4. Has the content of the course impacted your clinical practice due to the COVID-19 pandemic, the current social justice and equality movement, or the current healthcare policy debate? If yes, how? (Supplement Table, data repository: https://doi.org/10.17605/OSF.IO/PGS8Q 21 )

Qualitative Analysis: We utilized a rapid practical thematic analysis approach, aligned with the guide developed by Braun & Clarke 22 , to analyze responses to the four reflective questions. As our analytical framework, we used the Prochaska & DiClemente Stages of Change Model of Behavior Change 23 , which has previously been used to categorize qualitative data from bias interventions 24 . The stages of change categories are: Precontemplation (no intention to change), Contemplation (awareness of a problem), Determination (preparing to change), Action (changed behavior), and Maintenance (maintaining new behavior) 23 . Study team members (JS, GG, KM, WB, BWY, BF) independently coded a subset of reflective question responses according to the Stages of Change model. The study team then collaboratively reviewed and finalized codes through an iterative process. Once codes were finalized, one team member (GG) applied the codes to the remaining questions and responses. All team members collectively defined and finalized themes and sub-themes within each stage of the model. We analyzed responses using Microsoft Excel, with responses by participant in separate cells and separate sheets. Stages of change were coded ordinally: precontemplation scored 0, contemplation 1, determination 2, action 3, and maintenance 4.
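The ordinal coding scheme described above can be sketched as a simple mapping from stage labels to scores. The function name and structure here are illustrative assumptions (the study team performed this coding in Microsoft Excel, not in code).

```python
# Ordinal Stages of Change coding as described in the Methods.
# Category names follow Prochaska & DiClemente's model.
STAGE_CODES = {
    "precontemplation": 0,  # no intention to change
    "contemplation": 1,     # awareness of a problem
    "determination": 2,     # preparing to change
    "action": 3,            # changed behavior
    "maintenance": 4,       # maintaining new behavior
}

def code_response(stage: str) -> int:
    """Map a coded stage label to its ordinal score (case-insensitive)."""
    return STAGE_CODES[stage.strip().lower()]
```

Treating the stages as an ordered 0 to 4 scale preserves the model's progression from no intention to change through maintained behavior change, which is what makes group comparisons across stages meaningful.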

Quantitative Analysis: Quantitative analysis consisted of descriptive statistics to characterize the sample and the implicit and explicit bias scores, and one-way between-groups analysis of variance (ANOVA) to explore differences in IAT scores by participant stage of change and impact of the course. The IAT D score mean was derived using the standard IAT scoring algorithm 19 . Quantitative analysis was conducted using Stata (StataCorp. 2019. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC.) 25 and IBM SPSS Statistics for Macintosh (version 26.0, Armonk, NY: IBM Corp) 26 [see software availability statement].
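The one-way between-groups ANOVA described above can be sketched as follows. The study used Stata and SPSS; this Python equivalent with synthetic IAT D scores (not the study data) is for illustration only, and the group values are invented assumptions.

```python
# Illustrative one-way between-groups ANOVA comparing synthetic IAT D
# scores across self-reported stage-of-change groups (invented data,
# not the study sample).
from scipy import stats

no_impact = [0.10, -0.05, 0.20, 0.15, 0.05]
contemplation = [0.25, 0.35, 0.30, 0.28, 0.32]
action = [0.40, 0.50, 0.45, 0.38, 0.47]

# f_oneway returns the F statistic and the associated p-value
f_stat, p_value = stats.f_oneway(no_impact, contemplation, action)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant p-value here would indicate that mean D scores differ across at least one pair of stage groups, which is the form of the comparison reported in Tables 4 and 5.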

Results

Sample

Participants

Participants were 64.3% female, 66.1% White, 67.9% Medical Doctors (MD), and 82.1% worked in an academic healthcare system ( Table 1). We compared those who responded with those who did not and found a few differences between participants who did and did not participate in the Year 2 follow up study. Follow up study participants reported significantly fewer years in practice (16.0 vs. 21.2 years, p=0.02), more worked in an academic healthcare system (82.1% vs. 63.2%, p=0.04), and they held stronger explicit gender bias (female/family mean= −1.07 vs. mean= −0.63, p=0.02) (Supplement Table, data repository: https://doi.org/10.17605/OSF.IO/PGS8Q 21 ).

Table 1. Study Sample Characteristics (N=56).

N (%)
Gender
Male 20 (35.7)
Female 36 (64.3)
Age
30–39 23 (41.1)
40–49 17 (30.4)
50–59 8 (14.3)
60+ 7 (12.5)
Prefer not to Answer 1 (1.8)
Ethnicity
Hispanic/Latino 4 (7.1)
Not Hispanic/Latino 52 (92.9)
Race
American Indian/Alaska Native 0 (0.0)
Asian 9 (16.1)
Black/African American 7 (12.5)
Native Hawaiian or Other Pacific Islander 0 (0.0)
White 37 (66.1)
Two or more races 1 (1.8)
Other 2 (3.6)
Provider Type
Medical Doctor (MD) 38 (67.9)
Nurse Practitioner (NP)/Physician Assistant (PA) 9 (16.1)
Other 9 (16.1)
Healthcare System
Academic Healthcare System 46 (82.1)
Community Healthcare System 9 (16.1)
Other 1 (1.8)
Census Region
Northeast 9 (16.1)
Midwest 14 (25.0)
South 22 (39.3)
West 11 (19.6)
Professional Experience Mean (SD)
Years in Clinical Practice
Mean (SD)
16.0 (11.2)
Years in Current Position
Mean (SD)
7.9 (8.3)
# Patients/Week Direct Clinical Contact
Mean (SD)
21.4 (10.6)
Implicit and Explicit Scores Mean (SD) a | P-value b
Implicit Gender (IAT) 0.33 (0.31) | <0.001
Implicit Race (IAT) 0.27 (0.45) | <0.001
Explicit Race 0.11 (0.76) | 0.524
Explicit Gender – Career 0.76 (1.07) | <0.001
Explicit Gender – Family -1.07 (0.95) | <0.001

a. For implicit and explicit measures, a positive score favors White, male with career; a negative score favors Black, female with family. Interpretation of implicit and explicit measures: between −0.15 and 0.15, little to no preference; from ±0.15 to ±0.35, slight preference; from ±0.35 to ±0.65, moderate preference; beyond ±0.65, strong preference.

b. T-test to determine whether mean differs from “0”
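The test in footnote b, checking whether each mean bias score differs from zero (no bias), can be sketched with a one-sample t-test. The D scores below are synthetic values for illustration, not the study data.

```python
# One-sample t-test of whether a sample's mean bias score differs from
# zero, as in footnote b of Table 1 (synthetic data, not the study sample).
from scipy import stats

d_scores = [0.35, 0.10, 0.42, 0.28, 0.15, 0.33, 0.22, 0.40, 0.05, 0.31]

# Null hypothesis: the population mean D score is 0 (no bias)
t_stat, p_value = stats.ttest_1samp(d_scores, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value rejects the null of no bias, which is how the implicit and explicit means in Table 1 are flagged as significantly different from zero.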

Implicit and explicit measures

Overall, our sample held slight implicit pro-White bias (mean= 0.27, SD 0.45), and slight implicit gender bias associating males rather than females with the concept of “career” versus “family” (mean= 0.33, SD 0.31), ( Table 1). On explicit measures, we did not find evidence of explicit race bias (mean=0.11, SD 0.76). We found strong self-reported explicit bias associating “male” with “career” versus “family” (mean=0.76, SD 1.07), and female with “family” versus “career” (mean= -1.07, SD 0.95).

Overall impact of the course

Across all four questions, 42 unique participants (75.0%) reported “yes” to at least one question, indicating that the course had an impact on their teaching/mentoring and/or practice; this represents 35.3% of the original sample ( Table 2). One year after taking the course, 35 participants (62.5%) reported that the content of the course had an impact on their teaching and 23 participants (41.1%) reported that the course had an impact on their clinical practice ( Table 2). Those assigned to the Maintenance stage (n=4) reported that they were already engaged in strategies to manage implicit bias in their setting. Participants who responded “no” to a question were not assigned a stage of change (SOC). For the two questions about teaching and practice relative to the COVID-19 pandemic and the current social justice and equality movements, 26 participants (46.4%) reported that the course impacted their teaching and 16 participants (28.6%) reported that the course impacted their practice. Across all four questions, participants reported 35 instances of increased awareness of bias (Contemplation) and 47 instances of actions taken that they attributed to the content of the course.

Table 2. Impact of Course on Teaching and Practice and Stages of Change (N=56).

N (%)
“Yes” to at least 1 question: the number of unique individuals who responded, Yes, the content of the course impacted my teaching/mentoring and/or practice. 42 (75.0%)
1. Reflecting on this course, has the content of the course impacted your teaching and/or mentoring? If yes, How?
No 21 (37.5)
Yes 35 (62.5)
If yes, Stages of Change
Contemplation 12 (34.3)
Determination 2 (5.7)
Action 17 (48.6)
Maintenance 4 (11.4)
2. Has the content of the course impacted your teaching and/or mentoring due to the COVID-19 pandemic, the current social justice and equality movement, or the current healthcare policy debate? If yes, How?
No 30 (53.6)
Yes 26 (46.4)
If yes, Stages of Change
Contemplation 8 (30.8)
Determination 2 (7.7)
Action 13 (50.0)
Maintenance 3 (11.5)
3. Reflecting on this course, has the content of the course impacted your clinical practice? If yes, How?
No 33 (58.9)
Yes 23 (41.1)
If yes, Stages of Change
Contemplation 7 (30.4)
Determination 2 (8.7)
Action 11 (47.8)
Maintenance 3 (13.0)
4. Has the content of the course impacted your clinical practice due to the COVID-19 pandemic, the current social justice and equality movement, or the current healthcare policy debate? If yes, how?
No 40 (71.4)
Yes 16 (28.6)
If yes, Stages of Change
Contemplation 8 (50.0)
Determination 1 (6.3)
Action 6 (37.5)
Maintenance 1 (6.3)

Contemplation stage of change

The Contemplation stage of change is characterized by awareness of a problem 23 . Examples of responses about the impact of the course on teaching that were assigned to the Contemplation stage of change are: “It has made me more aware of things I say, do and include in my courses and classrooms”, “Yes, I do think about bias all the time”, “Enhanced awareness” and “I have thought about it with respect to teaching, but am unsure how to make it actionable.” Examples of responses about the impact of the course on practice that were assigned to the Contemplation stage of change are: “try to be more aware of how cultural differences impact care decisions”, “yes, more aware of my implicit and explicit biases”, “I am more aware of how institutional racism might have impacted the patient's experience with the healthcare system”, and “As I am working with diverse patients I am aware of contributing to a culture of inclusion and respect for all backgrounds.”

Action stage of change

We identified seven types of actions that participants implemented in teaching, with illustrative quotes provided in Table 3: 1) include implicit bias education in the curriculum, such as adding content related to micro- and macro-aggressions; 2) initiate discussions about implicit bias (participants reported feeling more comfortable discussing biases with students given the information offered by the course); 3) teach students the impact of implicit bias on health disparities; 4) become more aware of their own biases while mentoring trainees; 5) intentionally elevate the voices of individuals from underrepresented groups; 6) make diversity a priority in the recruitment process; and 7) become involved in training and service that promote equity. Among the participants who reported taking action in their clinical practice due to the course, we identified five types of actions, with illustrative quotes in Table 3: 1) engage in reflective practice to think of patients in their broader sociocultural contexts; 2) provide empathetic listening when seeing patients and validate patients’ experiences and perspectives; 3) actively advocate for better access to care for patients of color; 4) improve practice policies by evaluating for implicit bias and seeking data; and 5) support antiracist social causes through participation and donations.

Table 3. Self-reported Actions Taken that Participants Attributed to Course Content.

EXAMPLES ACTIONS: TEACHING
Themes Illustrative Quotes
1. Add implicit bias education in curriculum “Broadly shifted with increased awareness and have added curriculum on directly addressing micro and macro aggressions.”
2. Initiate discussions on implicit bias “I'm more likely to bring this up as a topic for discussion with students, since I feel empowered to know there is literature/science out there to back up associations.”
3. Teach students the impact of implicit bias on health disparities “It has been essential in helping students to understand why we shouldn't be surprised by the racial and social differences in outcomes.”
4. Be more aware of own biases when engaged in mentorship “I am more aware of my own biases when providing additional resources and letters of recommendations for students.”
5. Elevate the voices of underrepresented groups " I have been more cognizant of elevating the voices of individuals from underrepresented groups.”
6. Prioritize diversity in recruitment “Recently hired a research assistant and care was taken to make diversity a priority.”
7. Get involved in training and service that promote equity “Yes, it has made me more passionate about this and I am now serving on our diversity/inclusion committee and helping with an anti-racism curriculum for our college.”
EXAMPLES ACTIONS: PRACTICE
Themes Illustrative Quotes
1. Engage in reflective practice “It has altered my perspective on some patients' presentations. For instance, why they are seeking certain resources that others may not have. It has compelled me to take additional steps in some cases in an attempt to secure resources that I may not have previously.”
2. Provide empathetic listening “I have been able to validate the experiences more of patients I have from marginalized groups.”
3. Advocate for patients of color “I aim to dispel implicit bias on Patients of Color as I promote access to care during the COVID pandemic. I advocate more testing and treatment when indicated in this population since they are adversely affected by COVID.”
4. Improve the policies of practice “Again, I think the overall increased trainings in implicit bias have given me more awareness and confidence in recognizing implicit bias in interpersonal interactions that I am involved with and observe. Also, when planning and implementing new practice policies, I think implicit bias is more part of the discussion.”
5. Support antiracist social causes “I have participated in campus wide events designed to improve equity, increased donations targeted to be actively antiracist and continued to provide whatever support to trainees that I can while working remotely.”

Self-Reported impact of course on teaching and practice, implicit and explicit bias and stages of change

With the exception of implicit race bias and impact on teaching, there were no significant differences in implicit race and gender bias scores by stage of change or impact of the course; IAT scores for each stage were similar ( Table 4). For impact of the course on teaching, implicit race bias differed significantly by stage of change (p=0.036). Participants who reported that the course had no impact on teaching held no implicit race bias (mean=0.12, SD 0.49), those in the Contemplation stage held slight implicit pro-White bias (mean=0.30, SD 0.34), and those who reported taking Action (behavior change) due to the course held moderate implicit pro-White bias (mean=0.44, SD 0.38). There were no differences in impact of the course on teaching and practice based on participants’ explicit race bias, which was minimal ( Table 5). Explicit gender bias scores for all stages of change, and for “no impact”, revealed strong associations of male with career and female with family.

Table 4. Self-reported Impact of Course on Teaching and Practice, Implicit Bias Scores and Stages of Change (N=56).

TEACHING
Implicit Race IAT Implicit Gender IAT
Stage of Change N Mean (SD) a P-value b Stage of Change N Mean (SD) a P-value
No 19 0.12 (0.49) 0.036 No 19 0.33 (0.32) 0.675
Contemplation 14 0.30 (0.34) Contemplation 14 0.25 (0.34)
Determination 2 0.80 (0.28) Determination 2 0.52 (0.38)
Action 17 0.44 (0.38) Action 17 0.36 (0.28)
Maintenance 4 -0.11 (0.56) Maintenance 4 0.45 (0.37)
PRACTICE
Implicit Race IAT Implicit Gender IAT
No 33 0.25 (0.48) 0.988 No 33 0.31 (0.31) 0.903
Contemplation 7 0.32 (0.34) Contemplation 7 0.37 (0.34)
Determination 2 0.30 (0.30) Determination 2 0.33 (0.44)
Action 11 0.31 (0.48) Action 11 0.40 (0.34)
Maintenance 3 0.18 (0.61) Maintenance 3 0.23 (0.22)

a. For implicit measures, a positive score favors White, male with career; a negative score favors Black, female with family. Interpretation of implicit measures: between −0.15 and 0.15, little to no preference; from ±0.15 to ±0.35, slight preference; from ±0.35 to ±0.65, moderate preference; beyond ±0.65, strong preference.

b. One-way between group analysis of variance (ANOVA) tests.

Table 5. Self-reported Impact of Course on Teaching and Practice, Explicit Bias Scores and Stages of Change (N=56).

TEACHING
Explicit Race Explicit Gender: Career Explicit Gender: Family
Stage of Change N Mean (SD) a P-value b Stage of Change N Mean (SD) a P-value b Stage of Change N Mean (SD) a P-value b
No 19 0.00 (0.58) 0.505 No 19 0.63 (1.10) 0.211 No 19 -1.37 (0.90) 0.514
Contemplation 14 0.07 (1.07) Contemplation 14 0.69 (1.10) Contemplation 14 -1.00 (0.96)
Determination 2 1.00 (0.00) Determination 2 0.50 (0.71) Determination 2 -0.50 (0.71)
Action 17 0.19 (0.75) Action 17 0.71 (0.85) Action 17 -0.88 (0.93)
Maintenance 4 0.00 (0.00) Maintenance 4 2.00 (1.41) Maintenance 4 -1.00 (1.41)
PRACTICE
Explicit Race Explicit Gender: Career Explicit Gender: Family
No 33 0.06 (0.72) 0.242 No 33 0.66 (1.15) 0.611 No 33 -0.94 (0.93) 0.473
Contemplation 7 0.29 (0.76) Contemplation 7 1.14 (1.07) Contemplation 7 -1.14 (1.07)
Determination 2 -1.00 (0.00) Determination 2 0.00 (1.41) Determination 2 -0.50 (0.71)
Action 11 0.27 (0.90) Action 11 1.00 (0.77) Action 11 -1.36 (1.03)
Maintenance 3 0.33 (0.58) Maintenance 3 0.67 (1.15) Maintenance 3 -1.67 (0.58)

a. For explicit measures, a positive score favors White, male with career; a negative score favors Black, female with family. Interpretation of explicit measures: between −0.15 and 0.15, little to no preference; from ±0.15 to ±0.35, slight preference; from ±0.35 to ±0.65, moderate preference; beyond ±0.65, strong preference.

b. One-way between group analysis of variance (ANOVA) tests.

Discussion

To our knowledge this is the first study to examine lasting effects of brief, online health equity/implicit bias education after one year alongside implicit and explicit race and gender biases and participant reports of the impact of the course on their teaching and practice. One year after completing the brief health equity/implicit bias course, 42 participants (75.0% of responders; 35.3% of the original sample) answered yes, the course had an impact, to at least one of the four questions, with 62.5% of responders reporting that the course had an impact on their teaching and 41.1% reporting an impact on their practice. Our study is unique in that participants directly attributed their self-reported increased awareness of bias and the actions they took to the course content, suggesting that brief health equity/implicit bias education has the potential to impact specific activities in the wide range of primary care work settings found in academic medicine and other healthcare environments.

Applying a Stages of Change Model 23 allowed us to classify responses as reporting no change, increased awareness (Contemplation), or new actions and behaviors directly attributed to the course. In the model 23 , Contemplation is a pre-action stage. Participants in this study at the Contemplation stage most often expressed a new awareness of health equity and bias in healthcare. Participants at the Action stage described specific self-reported actions in their teaching and practice that they attributed to the course content. Our results suggest that it may take time for the effects of such education to manifest. For those who reported no impact, more ongoing exposure to implicit bias/health equity education may be needed, including direct skills-building content, which has been shown to enhance the impact of implicit bias education 11 . In addition, a portion of clinicians may simply not be affected by this type of education. This is an area that warrants further exploration.

Our sample held implicit and explicit biases similar to those of the general population and other clinicians 5, 20, 27 . We found that the strength of implicit race bias was associated with differences in the course's impact on teaching. Those who reported no impact of the course on their teaching held no implicit race bias, which may signify that there was no impact because they had already incorporated the content of the course into their teaching and practice. Participants who reported taking action held a moderate implicit pro-White bias, the strongest race bias reported by stage of change. Perhaps those with greater implicit pro-White bias were motivated by the course content to take action in their teaching, compared with those who held no bias and reported no impact. How implicit and explicit biases affect the effectiveness and application of health equity/implicit bias education to teaching and practice is an area for further study.

Of particular interest are the self-reported actions (change in behavior) participants took over the course of the year that they directly attributed to the course. Few, if any, studies of health equity/implicit bias education for academic clinicians have reported specific actions taken one year after a brief course that learners directly attributed to course content. Much of the change reported in the literature is anticipatory. One study that evaluated training for residents and faculty found that, six months later, participants reported the training had increased commitment to addressing bias and institutional vigilance regarding implicit bias 28 . Another study examined the impact of health equity rounds on racism and implicit bias in patient care and found that the majority of participants reported that the rounds would impact their future clinical practice 29 .

This study occurred during a significant period in U.S. history. Between the end of the initial study in 2019 and the follow-up in late 2020 came the start of the COVID-19 pandemic and the murders of George Floyd and others by police, which reinvigorated the social justice and Black Lives Matter movements. Study participants were healthcare professionals who experienced the pandemic from the perspectives of healthcare provider, teacher, and citizen. During 2020, existing societal inequalities were widely reported in the media and were inescapable. While we were not able to separate the effects of the course from the effects of the pandemic and the renewed focus on social inequities, we specifically asked participants whether “ the content of the course impacted your teaching and/or mentoring” and asked separately about impact of the course “ due to the COVID-19 pandemic, the current social justice and equality movement, or current healthcare policy debate.” The relevance of the course may have been magnified by social events occurring outside academic medicine. We do not know whether or how these societal forces contributed to the impact of this specific health equity/implicit bias course or of health equity/implicit bias education more broadly. Future research may clarify the influence of these forces on clinician education.

Limitations

There are several limitations to our study. Our sample is a convenience sample, not a representative sample. We had a 47.1% response rate, almost one half of our original sample. It is remarkable that participants gave us their time during the height of the COVID-19 pandemic, when demands on primary care providers’ time were at a maximum. Participants are academic primary care clinicians who were interested enough in the topic of implicit bias to give us one hour of their time in 2019 and another 30–45 minutes one year later to participate in the current study. This sample is likely skewed toward clinicians who are not resistant to learning about racism, social determinants of health, and implicit bias in healthcare. In addition, it is likely that many participants were already well versed in the topic of implicit bias in healthcare and as such may not be representative of the primary care clinician population. Reliability of the IAT has been questioned; however, we used best practices in the design and implementation of the IAT and explicit measures in research, practices that address specific critiques of the IAT 17, 30 . For example, we used IATs in a single session, which is considered adequately reliable for reporting group differences but not for individual diagnostic purposes; we used the standard 7-block IAT construction; target concepts and exemplar stimuli were developed according to best practices; and our analysis used the standard IAT D score algorithm 18 . For the qualitative analysis, coders trained in identifying stages of change using a practical thematic analysis strategy were aware of the study’s purpose, which may have had an impact on coding. While 75.0% (n=42) of the Year 2 study participants reported lasting effects on teaching and/or practice for at least one question, this represents 35.3% of the original sample from which study participants were drawn.
The impact of the course was determined through self-report, which is not considered as reliable as direct objective observation. We do not know how the events of the year 2020 may have influenced participants’ responses. Although implicit and explicit bias measures were collected in the baseline study one year prior, research shows that these measures are fairly stable over time and are not easily mutable 10 . Despite these limitations, this study found that brief health equity/implicit bias education can have lasting effects and, for some, motivate self-reported behavior change. Longitudinal evaluation of the impact of health equity/implicit bias education is recommended.

Implications for education and practice

Brief health equity/implicit bias education is just one component of a systems-level comprehensive education plan, but it is an important one. Brief implicit bias education that is designed for academic clinicians who teach can, for many learners, increase awareness of bias and motivate taking action to address bias and inequity in their teaching and practice environments.

Ethics and consent

The University of Washington Human Subjects Institutional Review Board approved the study as minimal risk [Baseline study approval IRB # 00006978, Modification for follow up IRB #00008382, 11/13/2020]. Participants consented to the original study; for the follow up study, participants who entered the online survey were first greeted with a new IRB-approved consent form that explained the study (Supplement 2). To proceed into the study, participants had to click an agree-to-participate button.

The University of Washington IRB “compliance is described in University of Washington (UW) Executive Order Number 24, and in UW’s Federal wide Assurance (FWA), UW and its IRBs are guided by the ethical principles in the Belmont Report. In addition, UW HSD and the UW IRBs draw upon a variety of ethical codes, such as the Declaration of Helsinki, the Council for International Organizations of Medical Sciences (CIOMS) and the International Council for Harmonization (ICH) when developing policies.”

Funding Statement

This publication was supported by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS) as part of an award totaling $539,908 with zero percentage financed with non-governmental sources. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement, by HRSA, HHS or the U.S. Government.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 2; peer review: 1 approved, 3 approved with reservations]

Data availability statement

Underlying data

Due to ethical and security reasons and the need to protect data and participant privacy, we are not able to publicly share the data because we do not have adequate consent from our participants. In the case of this study, even deidentified transcripts may be identifiable because of specific examples (e.g., specific actions participants reported) and contexts (setting-specific information, etc.) participants shared; thus confidentiality may not be protected. Violation of confidentiality is of particular concern in this study due to the sensitive information that participants shared. For example, if a participant were identified, this could result in penalties. This study was approved by the University of Washington Institutional Review Board. For queries related to ethics of data access, contact Marya Kinsler from the UW IRB at 206-543-0471 or maryaj@uw.edu. We may consider requests for sharing de-identified data. In order to share the data on a case-by-case basis, we would need a description of the purpose of use, credentials of the individuals who would work with the data, confirmation of ethics training of the research team, the requestor’s institutional IRB approval for the research, and UW IRB review and approval for secondary use of the data. In addition, we would need to take additional steps to remove any remaining sensitive information. Data sharing requests may be granted only if the above conditions are met and the request is feasible. Please contact Dr. Janice Sabin (sabinja@uw.edu) or Dr. Bianca Frogner (bfrogner@uw.edu) for data sharing requests.

Extended data

Lasting effects of brief health equity/implicit bias education for academic clinicians: From learning to action, https://doi.org/10.17605/OSF.IO/PGS8Q 21

This project contains the following underlying data:

Underlying data file 1: dataset (Lasting Effects dataset) (restricted access)

Underlying data file 2: dataset description (Lasting Effects dataset-codebook), Consent Form, Survey (unrestricted access)

Underlying data file 3: Supplement Table: Sample Non-responders versus Responders Comparisons (N=114)

Software availability

Analysis can be done with open access software R: https://www.r-project.org/

References

  • 1. FitzGerald C, Hurst S: Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(1): 19. 10.1186/s12910-017-0179-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Williams DR, Cooper LA: Reducing racial inequities in health: using what we already know to take action. Int J Environ Res Public Health. 2019;16(4):606. 10.3390/ijerph16040606 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Kang J, Bennett M, Carbado D, et al. : Implicit bias in the courtroom. UCLA Law Rev. 2012;59(5):1124–1186. Reference Source [Google Scholar]
  • 4. Hall WJ, Chapman MV, Lee KM, et al. : Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. Am J Public Health. 2015;105(12):e60–76. 10.2105/AJPH.2015.302903 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Maina IW, Belton TD, Ginzberg S, et al. : A decade of studying implicit racial/ethnic bias in healthcare providers using the Implicit Association Test. Soc Sci Med. 2018;199:219–229. 10.1016/j.socscimed.2017.05.009 [DOI] [PubMed] [Google Scholar]
  • 6. Crump A, Al-Jorani MS, Ahmed S, et al. : Implicit bias assessment by career stage in medical education training: a narrative review. BMC Med Educ. 2025;25(1): 137. 10.1186/s12909-024-06319-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Cooper LA, Roter DL, Carson KA, et al. : The associations of clinicians' implicit attitudes about race with medical visit communication and patient ratings of interpersonal care. Am J Public Health. 2012;102(5):979–87. 10.2105/AJPH.2011.300558 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Sabin JA, Greenwald AG: The influence of implicit bias on treatment recommendations for 4 common pediatric conditions: pain, urinary tract infection, attention deficit hyperactivity disorder, and asthma. Am J Public Health. 2012;102(5):988–95. 10.2105/AJPH.2011.300621 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Tsai JW, Michelson CD: Attitudes toward implicit bias and implicit bias training among pediatric residency program directors: a national survey. J Pediatr. 2020;221:4–6. e1. 10.1016/j.jpeds.2020.01.002 [DOI] [PubMed] [Google Scholar]
  • 10. Greenwald AG, Dasgupta N, Dovidio JF, et al. : Implicit-bias remedies: treating discriminatory bias as a public-health problem. Psychol Sci Public Interest. 2022;23(1):7–40. 10.1177/15291006211070781 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Gonzalez CM, Lypson ML, Sukhera J: Twelve tips for teaching implicit bias recognition and management. Med Teach. 2021;43(12):1368–1373. 10.1080/0142159X.2021.1879378 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Willis J, Todorov A: First impressions: making up your mind after a 100–ms exposure to a face. Psychol Sci. 2006;17(7):592–8. 10.1111/j.1467-9280.2006.01750.x [DOI] [PubMed] [Google Scholar]
  • 13. Sabin J, Guenther G, Ornelas IJ, et al. : Brief online implicit bias education increases bias awareness among clinical teaching faculty. Med Educ Online. 2022;27(1): 2025307. 10.1080/10872981.2021.2025307 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Hawkins CB, Lofaro N, Umansky E, et al. : Understanding Implicit Bias (UIB): experimental evaluation of an online bias education program. J Exp Psychol Appl. 2023;29(4):887–902. 10.1037/xap0000469 [DOI] [PubMed] [Google Scholar]
  • 15. Zeidan AJ, Khatri UG, Aysola J, et al. : Implicit bias education and emergency medicine training: step one? Awareness. AEM Educ Train. 2018;3(1):81–85. 10.1002/aet2.10124 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Sukhera J, Watling C: A framework for integrating implicit bias recognition into health professions education. Acad Med. 2018;93(1):35–40. 10.1097/ACM.0000000000001819 [DOI] [PubMed] [Google Scholar]
  • 17. Greenwald AG, Brendl M, Cai H, et al. : Best research practices for using the Implicit Association Test. Behav Res Methods. 2022;54(3):1161–1180. 10.3758/s13428-021-01624-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Greenwald AG, Nosek BA, Banaji MR: Understanding and using the Implicit Association Test: I. An improved scoring algorithm. J Pers Soc Psychol. 2003;85(2):197–216. 10.1037/0022-3514.85.2.197 [DOI] [PubMed] [Google Scholar]
  • 19. Greenwald AG, McGhee DE, Schwartz JL: Measuring individual differences in implicit cognition: the Implicit Association Test. J Pers Soc Psychol. 1998;74(6):1464–1480. 10.1037//0022-3514.74.6.1464 [DOI] [PubMed] [Google Scholar]
  • 20. Nosek BA, Smyth FL, Hansen JJ, et al. : Pervasiveness and correlates of implicit attitudes and stereotypes. Eur Rev Soc Psychol. 2007;18(1):36–88. 10.1080/10463280701489053 [DOI] [Google Scholar]
  • 21. Sabin J, Frogner BK: Lasting effects implicit bias education [Data repository]. 10.17605/OSF.IO/PGS8Q [DOI] [Google Scholar]
  • 22. Braun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. 10.1191/1478088706qp063oa [DOI] [Google Scholar]
  • 23. Prochaska JO, DiClemente CC: Stages and processes of self-change of smoking: toward an integrative model of change. J Consult Clin Psychol. 1983;51(3):390–5. 10.1037//0022-006x.51.3.390 [DOI] [PubMed] [Google Scholar]
  • 24. Isaac C, Balloun J, Wofford T: Bias literacy for gender equity: a brief intervention. Open J Soc Sci. 2020;8(6):59–71. 10.4236/jss.2020.86006 [DOI] [Google Scholar]
  • 25. Stata statistical software. StataCorp LLC, 2025.
  • 26. SPSS for Windows, Version 16.0. SPSS Inc.; released 2007. Reference Source
  • 27. Sabin J, Nosek BA, Greenwald A, et al. : Physicians' implicit and explicit attitudes about race by MD race, ethnicity, and gender. J Health Care Poor Underserved. 2009;20(3):896–913. 10.1353/hpu.0.0185 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Sherman MD, Ricco J, Nelson SC, et al. : Implicit bias training in a residency program: aiming for enduring effects. Fam Med. 2019;51(8):677–681. 10.22454/FamMed.2019.947255 [DOI] [PubMed] [Google Scholar]
  • 29. Perdomo J, Tolliver D, Hsu H, et al. : Health Equity Rounds: an interdisciplinary case conference to address implicit bias and structural racism for faculty and trainees. MedEdPORTAL. 2019;15: 10858. 10.15766/mep_2374-8265.10858 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Nosek BA, Greenwald AG, Banaji MR: The implicit association test at age 7: a methodological and conceptual review. In: J. A. Bargh (Ed.), Automatic processes in social thinking and behavior.Psychology Press,2007;265–292. Reference Source [Google Scholar]
MedEdPublish (2016). 2026 Jan 2. doi: 10.21956/mep.22958.r44891

Reviewer response for version 2

Yannick Eller 1, Mehrdad Heravi 2

I have read the authors’ responses and the revised manuscript. The authors have addressed the principal concerns we raised. The title has been revised to clarify outcomes were self-reported, the methods section now clarifies the sampling and coding process for the qualitative data and the staged coding approach, the statistical analyses have been reworked and the limitations have been expanded to explicitly acknowledge the constraints. No further comments. Thank you to the authors and the editorial team for the thorough and constructive revisions.

Have any limitations of the research been acknowledged?

Yes

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Partly

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

No

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

Faculty development; competency-based training and assessment; simulation-based education and patient-safety improvement; interdisciplinary clinical education with application to oncology and imaging. This is an intentionally concise, high-level summary - please consult online data or prior reviews for detailed listings.

We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

MedEdPublish (2016). 2025 Oct 30. doi: 10.21956/mep.22263.r43316

Reviewer response for version 1

Yannick Eller 1, Mehrdad Heravi 2

The current title’s phrase ‘Lasting effects’ risks implying objectively measured, durable behaviour change; because this manuscript reports participant self-reports rather than direct observation or administrative outcomes, I suggest a gentler phrasing that aligns title and evidence—for example, ‘Self-reported impacts one year after a brief implicit-bias/health-equity course for academic clinicians’ or ‘Participant-reported impacts one year following a brief implicit-bias course’. Either option preserves the emphasis on follow-up while making the evidence base explicit and avoiding overstated causal language.

The term "brief education" in the title may not fully capture the nature and scope of the intervention described in the article. Furthermore, the phrase “health equity” is used broadly, but the study principally measures race- and gender-related implicit attitudes; the title should either reflect that narrower focus or the authors should justify the broader EDI claim. Revising the title with these considerations could enhance reader comprehension and engagement.

The summary is clearly written and provides a concise overview of the study’s scope and findings. However, the Abstract and Introduction currently overstate the strength of the evidence: the manuscript uses baseline IATs and follow-up self-reports, which are best described as Kirkpatrick Level-1 (awareness/reaction) outcomes. It would strengthen the rationale to explicitly state that this study aims not only to explore additional impacts beyond awareness, such as behavioral change, but also to acknowledge the limits of self-report follow-up designs for demonstrating such change. Furthermore, regarding the second research question on the association between clinician implicit and explicit bias and lasting educational effects, the current study’s results appear to align more closely with the first research question; the manuscript should therefore clarify how the quantitative and qualitative strands were intended to be integrated to address the second question.

The methods section is well articulated and clearly describes how data were gathered and analyzed, providing transparency and rigor in the study’s approach. That said, the manuscript’s repeated use of “multi-method” is not currently justified: the authors should either describe a prespecified mixed-methods design (and how integration occurred) or relabel the study as a quantitative baseline with a qualitative follow-up. The methodology appears to be more directly aligned with addressing the first research question regarding the overall effects of the education program. It is less clear how the methods specifically support the investigation of the second question related to the association between clinicians' implicit and explicit bias and lasting educational effects. Please clarify sampling logic for the follow-up, who coded the qualitative responses, how many responses were double-coded, intercoder reliability (or how coder agreement was handled), and how saturation was judged. Clarifying this alignment in the methods would strengthen the coherence and focus of the study design.

The results are clearly presented, with well-organized tables and text that effectively summarize the key findings without over-interpretation. The clarity of the presentation allows readers to readily understand the outcomes of the study, supporting transparency and comprehension.

The discussion is well supported by the study’s results, effectively integrating findings with relevant context. Still, the manuscript would benefit from a clearer limitation statement that distinguishes self-reported actions from objectively observed behaviour change (Kirkpatrick Level 3) and that proposes concrete Level 3/4 follow-up designs (for example, matched-control or stepped-wedge trials, objective audits, or patient-level outcome tracking). A particularly positive aspect is the acknowledgment of the COVID-19 pandemic and local social activities as potential influences on the study outcomes, which demonstrates thoughtful consideration of external factors that may impact implicit bias education and its effects.

The take-home message is concise and effectively summarizes the key findings and significance of the study. I would, however, soften language that implies verified “lasting effects” and reframe conclusions to reflect participant reports of impact rather than definitive evidence of behaviour change. Additionally, the references are well curated, relevant, and support the arguments presented throughout the article.

Evaluation

Quality of science: X_lower 50% __upper 50% __top 25% __top 10%

Impact/significance/applicability: __lower 50% X_upper 50% __top 25% __top 10%

Originality/novelty: __lower 50% X_upper 50% __top 25% __top 10%

Absolute score: 58% - promising, but requires substantive reframing, additional methodological transparency, and reanalysis.

Verdict: Revise & Resubmit.

Have any limitations of the research been acknowledged?

Yes

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Partly

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

No

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

Faculty development; competency-based training and assessment; simulation-based education and patient-safety improvement; interdisciplinary clinical education with application to oncology and imaging. This is an intentionally concise, high-level summary - please consult online data or prior reviews for detailed listings.

We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however we have significant reservations, as outlined above.

MedEdPublish (2016). 2025 Dec 3.
Janice Sabin 1

Thank you for your comments. Please see the comments to reviewers 1, 2, and 3 in addition to these. Revision: We have changed the title to: Self-reported impacts one year after a brief health equity/implicit bias course for academic clinicians.

 The term "brief education" in the title may not fully capture the nature and scope of the intervention described in the article. Furthermore, the phrase “health equity” is used broadly, but the study principally measures race- and gender-related implicit attitudes; the title should either reflect that narrower focus or the authors should justify the broader EDI claim. Revising the title with these considerations could enhance reader comprehension and engagement.

The summary is clearly written and provides a concise overview of the study’s scope and findings. However, the Abstract and Introduction currently overstate the strength of the evidence: the manuscript uses baseline IATs and follow-up self-reports, which are best described as Kirkpatrick Level-1 (awareness/reaction) outcomes.  It would strengthen the rationale to explicitly state that this study aims not only to explore additional impacts beyond awareness, such as behavioral change, but also to acknowledge the limits of self-report follow-up designs for demonstrating such change. Furthermore, regarding the second research question on the association between clinician implicit and explicit bias and lasting educational effects, the current study’s results appear to align more closely with the first research question; the manuscript should therefore clarify how the quantitative and qualitative strands were intended to be integrated to address the second question.

  Revision: We added to the limitations- The impact of the course was determined through self-report, which is not considered as reliable as direct objective observation. (p19)

  The methods section is well articulated and clearly describes how data were gathered and analyzed, providing transparency and rigor in the study’s approach. That said, the manuscript’s repeated use of “multi-method” is not currently justified: the authors should either describe a prespecified mixed-methods design (and how integration occurred) or relabel the study as a quantitative baseline with a qualitative follow-up. The methodology appears to be more directly aligned with addressing the first research question regarding the overall effects of the education program. It is less clear how the methods specifically support the investigation of the second question related to the association between clinicians' implicit and explicit bias and lasting educational effects. Please clarify sampling logic for the follow-up, who coded the qualitative responses, how many responses were double-coded, intercoder reliability (or how coder agreement was handled), and how saturation was judged. Clarifying this alignment in the methods would strengthen the coherence and focus of the study design.

  Revision: This mixed-method, exploratory study returned to a sample of academic primary care clinicians who completed a survey and online health equity/implicit bias education, Implicit Bias in the Clinical and Learning Environment 14 , between September 2019 and December 2019, which we refer to as our baseline study; the course is publicly available at [ https://depts.washington.edu/somalt/implicitbias-pi/story.html ]. (p5)   Participants who completed the baseline study were invited to participate in the current follow up study to evaluate lasting effects of the course. The follow up study was conducted between December 2020 and March 2021. (p5)   Clarification of coding Revision: We utilized a rapid practical thematic analysis approach for analysis of responses to the four reflective questions that aligns with the guide developed by Braun & Clarke (new CITE). We used the Prochaska & DiClemente Stages of Change Model of Behavior Change 24 , which has previously been used to categorize qualitative data from bias interventions, as our analytical framework 25 . The stages of change categories are: Precontemplation (no intention to change), Contemplation (awareness of a problem), Determination (preparing to change), Action (changed behavior), and Maintenance (maintaining new behavior) 24 . Study team members (JS, GG, KM, WB, BWY, BF) independently coded a subset of reflective question responses according to the Stages of Change model. The study team then collaboratively reviewed and finalized codes through an iterative process. Once codes were finalized, one team member (GG) applied the codes to the remaining questions and responses. All team members collectively defined and finalized themes and sub-themes within each stage of the model. We analyzed responses using Microsoft Excel, with responses by participant in separate cells and separate sheets.
Stages of change were coded ordinally, with precontemplation scoring a 0, contemplation a 1, determination a 2, action a 3, and maintenance a 4. (p7)
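The ordinal coding described above maps directly onto a simple lookup. A hypothetical sketch (names and structure are ours, not the authors' analysis code):

```python
# Ordinal codes for the Prochaska & DiClemente stages of change,
# as described in the revised methods text.
STAGE_CODES = {
    "Precontemplation": 0,  # no intention to change
    "Contemplation": 1,     # awareness of a problem
    "Determination": 2,     # preparing to change
    "Action": 3,            # changed behavior
    "Maintenance": 4,       # maintaining new behavior
}

def code_stage(label: str) -> int:
    """Return the ordinal code for a coded stage label (case-insensitive,
    ignoring surrounding whitespace)."""
    return STAGE_CODES[label.strip().capitalize()]
```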

MedEdPublish (2016). 2025 Sep 4. doi: 10.21956/mep.22263.r42790

Reviewer response for version 1

Cristina M Gonzales 1

I appreciate the opportunity to review this important work on a construct that is of great interest to me. I commend the authors on seeking to investigate the effect of their intervention beyond the immediate post-intervention stage. Overall, I am pleased to see the authors striving to link implicit bias instruction to educationally and clinically meaningful outcomes. I do have some significant concerns that I detail below, and offer suggestions in a collegial spirit.

Abstract:

Results: The findings are all self-reported. In the abstract (and throughout the manuscript, as detailed below) the authors over-state their findings. I am unable to see how the numbers add up to 63.8% of participants having taken at least one action, even just accounting for the sample size of responders (N=56). I calculate 50%, and even then, how did the authors account for the same participant taking an action in both their teaching and their clinical practice? Is this double counting accounted for? Regardless of what the accurate number should be, once that is addressed I suggest the authors revise the last sentence in the results so that it reads …(self-reported actual behavior change)…, given no actual behavior change was objectively observed.

Conclusion: Please add self-reported to this text as well. The final sentence is an overstatement of the findings. There is no evidence that any actions improved either teaching or practice. The participants who reported taking action reported applying what they learned to their teaching and practice (and even then, it would be a stretch to say that given we do not know the content of the training).

Keywords: separate out implicit bias and explicit bias

Introduction:

For the research question- what was answered was lasting self-reported effects. Suggest revising the research question to reflect what was answered.

In the section Implicit Bias Education: the first sentence is confusing as written.

The IBRM framework was not developed or described by Sukhera and Watling in 2018. That framework uses the term recognition to mean awareness, and gives guidance on raising awareness in self and others about implicit bias, and guidance on integrating implicit bias instruction into health professions education. However, there is no skills-based component to it. IBRM actually emerged from Gonzalez’s lab (the exact term of implicit bias recognition and management—IBRM—has been written in grant applications from that lab since 2011). In 2020 Gonzalez partnered with Sukhera and Watling to overlay IBRM onto transformative learning theory, and in 2021 Gonzalez partnered with Lypson and Sukhera to write a 12 tips paper operationalizing the IBRM framework into actual teaching practice. Please use the latter reference (2021) and if space allows the former, and revise the sentence to accurately reflect the development of the IBRM framework.

Methods:

For the IAT, the D scores reported by Project Implicit are not interpreted in the same way as Cohen’s d scores. This will be confusing to any reader who is familiar with D score reporting with IATs. Please revise to reflect the D scores (which are continuous variables reported as categories of neutral, moderate, and strong preferences for ease of interpretation) with the appropriate cutoffs (+/- 0.15, +/- 0.35, and +/- 0.65, respectively).

Results:

As above, I do not understand how the authors reached the 63.8% value and the potential for double counting. The values in the paragraph under table 2 do not add up. Suggest increased clarity of reporting in the calculations.

Further, more than a third and more than half of participants did not report an impact of the course on their teaching or practice, respectively. These large values require the authors to soften the interpretations and conclusions, as they are currently overstated.

In Actions stage of change: #4 is not an action. Increased awareness without skill development can actually have negative consequences such as avoidance, interracial anxiety, and more. If they have awareness of their biases while mentoring trainees, what action are they actually taking? This also gets back to the problem of claiming improvement in teaching and practice. Suggest moving #4 to the contemplation stage and recalculating the numbers in the results section.

Discussion:

The first sentence overstates the findings. Only some participants experienced lasting effects (and for practice, actually less than half did).

At the end of the first paragraph, although the results may suggest that it may take time for the effects of such education to lead to changes in behaviors, the fact that more than one third and more than one half of participants did not experience changes in behavior in teaching or practice, respectively may also suggest the following: some participants may need more instruction, alternative methods, or may simply be unreachable due to the nature of the construct. One can't assume more time would have given them the opportunity to change, especially for those who said it had no impact.

Limitations: The reported long-lasting impacts of the intervention affected a subset of respondents who were a subset of the original sample who participated in the intervention. The authors need to engage more fully with that limitation. If the authors are unable to account for double counting in their calculations (i.e. the same participant took action in both their teaching and practice as a result of the intervention), then this needs to be discussed in the limitations.

Have any limitations of the research been acknowledged?

Partly

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

No

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

My lab developed the framework of Implicit Bias Recognition and Management and we develop and evaluate skills-based, behavioral interventions to address the impact of implicit bias on clinical encounters.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

MedEdPublish (2016). 2025 Dec 3.
Janice Sabin 1

Thank you for your comments. We clarified the impact and numbers, identifying unique participant responses. Revision: Across all four questions, 42 (75.0%) unique participants reported “yes”, the course had an impact on their teaching/mentoring and/or practice to at least one question, representing 35.3% of the original sample (Table 2). One year after taking the course, 35 participants (62.5%) reported that the content of the course had an impact on their teaching and 23 participants (41.4%) reported that the course had an impact on their clinical practice (Table 2). Those assigned to the Maintenance stage (n=4) reported that they were already engaged in strategies to manage implicit bias in their setting. (p10)

Revision: We added to the limitations section: For the qualitative analysis, coders who were trained in identifying stages of change using a practical thematic analysis strategy were aware of the study’s purpose, which may have had an impact on coding. While 75.0% (n=42) of the Y2 study participants reported lasting effects on teaching and/or practice for at least one question, this represents 35.3% of the original sample from which study participants were drawn. (p19)

Revision: We adjusted our description to read: self-reported awareness and actions throughout. We added to the limitations: The impact of the course was determined through self-report, which is not considered as reliable as direct objective observation. (p19)

Regardless of what the accurate number should be, once that is addressed I suggest the authors revise the last sentence in the results so that it reads…(self-reported actual behavior change)…given no actual behavior change was objectively observed.

Conclusion: Please add self-reported to this text as well. The final sentence is an overstatement of the findings. There is no evidence that any actions improved either teaching or practice. The participants who reported taking action reported applying what they learned to their teaching and practice (and even then, it would be a stretch to say that given we do not know the content of the training).

Keywords: separate out implicit bias and explicit bias: Done

Introduction:

For the research question- what was answered was lasting self-reported effects. Suggest revising the research question to reflect what was answered. Revised research questions: Our research questions were 1. Does health equity/implicit bias education have clinician’s self-reported lasting effects on teaching and practice, and if so, how? and 2. Is clinician implicit and explicit bias associated with self-reported lasting effects of health equity/implicit bias education? (p5)

  In the section Implicit Bias Education: the first sentence is confusing as written.   Revision: Implicit Bias Education   Participants engaged in brief online health equity/implicit bias education as part of the baseline study. (p5)

The IBRM framework was not developed or described by Sukhera and Watling in 2018. That framework uses the term recognition to mean awareness and gives guidance on raising awareness in self and others about implicit bias, and guidance on integrating implicit bias instruction into health professions education. However, there is no skills-based component to it. IBRM actually emerged from Gonzalez’s lab (the exact term of implicit bias recognition and management—IBRM—has been written in grant applications from that lab since 2011). In 2020 Gonzalez partnered with Sukhera and Watling to overlay IBRM onto transformative learning theory, and in 2021 Gonzalez partnered with Lypson and Sukhera to write a 12 tips paper operationalizing the IBRM framework into actual teaching practice. Please use the latter reference (2021) and if space allows the former, and revise the sentence to accurately reflect the development of the IBRM framework.

Revision: Although the course was developed prior to publication of the Gonzalez et al. (2021) (add Cite #11) framework operationalizing implicit bias recognition and management (IBRM) education and the Sukhera and Watling (2018) (add CITE #16) operationalization of a theoretical framework for implicit bias education, the course incorporated many of the features of these models, such as creating a safe environment, content on the science of implicit bias, normalizing bias, evidence of bias in the learning environment and practice, and increasing awareness of implicit bias 16 . Course content included: the history of racism in medicine, information about the social determinants of health, evidence of discrimination in healthcare, the science of implicit bias, and evidence about how implicit bias manifests in clinical care and the learning environment. Although skills practice was not part of the course, participants were provided with actionable strategies to mitigate the impact of bias in teaching and practice. (p6)

Methods:

For the IAT, the D scores reported by Project Implicit are not interpreted in the same way as Cohen’s d scores. This will be confusing to any reader who is familiar with D score reporting with IATs. Please revise to reflect the D scores (which are continuous variables reported as categories of neutral, moderate, and strong preferences for ease of interpretation) with the appropriate cutoffs (+/- 0.15, +/- 0.35, and +/- 0.65, respectively).

To avoid confusion we removed the statistic Cohen’s d and revised our analysis and IAT interpretation: We added IAT and explicit measures mean, SD, and p value based upon difference from “0”, and the following interpretation was added to the tables: Revision: For implicit and explicit measures, a positive score favors White, and male with career and female with family. A negative score favors Black, and female with family. Interpretation of implicit and explicit measures: |score| < 0.15, little to no preference; 0.15 ≤ |score| < 0.35, slight preference; 0.35 ≤ |score| < 0.65, moderate preference; |score| ≥ 0.65, strong preference.

Results:

As above, I do not understand how the authors reached the 63.8% value and the potential for double counting. The values in the paragraph under table 2 do not add up. Suggest increased clarity of reporting in the calculations.

Further, more than a third and more than half of participants did not report an impact of the course on their teaching or practice, respectively. These large values require the authors to soften the interpretations and conclusions, as they are currently overstated.

Revision: Across all four questions, 42 (75.0%) unique participants reported “yes”, the course had an impact on their teaching/mentoring and/or practice to at least one question, representing 35.3% of the original sample (Table 2). One year after taking the course, 35 participants (62.5%) reported that the content of the course had an impact on their teaching/mentoring and 23 participants (41.4%) reported that the course had an impact on their clinical practice (Table 2). Those assigned to the Maintenance stage (n=4) reported that they were already engaged in strategies to manage implicit bias in their setting. Participants who responded “no” to a question were not assigned a stage of change (SOC) because they reported that the course did not have an impact on their teaching/mentoring and practice. For the two questions about teaching and practice relative to the COVID-19 pandemic and the current social justice and equality movements, 26 participants (46.4%) reported that the course impacted their teaching and 16 participants (28.6%) reported that the course impacted their practice. Across all four questions, participants reported 34 instances of increased awareness of bias (Contemplation) and 47 self-reported actions taken that they attributed to the content of the course. (p10)

In Actions stage of change: #4 is not an action. Increased awareness without skill development can actually have negative consequences such as avoidance, interracial anxiety, and more. If they have awareness of their biases while mentoring trainees, what action are they actually taking? This also gets back to the problem of claiming improvement in teaching and practice. Suggest moving #4 to the contemplation stage and recalculating the numbers in the results section.

#4 “I am more aware of my own biases when providing additional resources and letters of recommendations for students.” Participants’ responses were very straightforward for the most part. In our qualitative analysis meetings coders discussed this particular response and decided that it was an action, not merely a general increased awareness, but applying increased awareness while engaging in the action of “providing” resources and letters of recommendation. We again reviewed this response and stand by our coding of action on this response. This was a direct uptake of the course content, which presented evidence of gender bias in letters of recommendation, how to identify the bias, and its impact for women in medicine. We clarified our analysis in Table 3 by explaining in the theme that participants were engaging in mentoring. Revision: Table 3, theme #4: Aware of own biases while engaging in mentorship activities.

Discussion:

The first sentence overstates the findings. Only some participants experienced lasting effects (and for practice, actually less than half did). We reviewed responses to the four questions and found that 42 unique participants answered yes to at least one question, thus 75% of participants were impacted by the course. Revision: To our knowledge this is the first study to measure whether there were lasting effects of brief, online health equity/implicit bias education after one year, implicit and explicit race and gender biases, and participant reports on the impact of the course on their teaching and practice. One year after completing a brief health equity/implicit bias course, we found that 42 participants (75.0%, and 35.3% of the original sample) answered yes, the course had an impact, to at least one of the four questions, with 62.5% of study responders reporting that the course had an impact on their teaching and 41.1% reporting an impact on their practice. Our study is unique in that participants directly attributed course content to their self-reported increased awareness of bias and reports of actions they took, suggesting that there is potential for brief health equity/implicit bias education to impact specific activities in a wide range of primary care work settings found in academic medicine and other healthcare environments. (p17)

At the end of the first paragraph, although the results may suggest that it may take time for the effects of such education to lead to changes in behaviors, the fact that more than one third and more than one half of participants did not experience changes in behavior in teaching or practice, respectively, may also suggest the following: some participants may need more instruction, alternative methods, or may simply be unreachable due to the nature of the construct. One can't assume more time would have given them the opportunity to change, especially for those who said it had no impact.

Revision: For those who reported no impact, more ongoing exposure to implicit bias/health equity education may be needed, including direct skills-building content in implicit bias education, which has been shown to enhance the impact of implicit bias course content (Gonzales et al., 2022, Twelve Tips). In addition, there may be a portion of clinicians who are simply not impacted by this type of education. This is an area that warrants further exploration. (p17)

Limitations: The reported long-lasting impacts of the intervention affected a subset of respondents who were a subset of the original sample who participated in the intervention. The authors need to engage more fully with that limitation. If the authors are unable to account for double counting in their calculations (i.e. the same participant took action in both their teaching and practice as a result of the intervention), then this needs to be discussed in the limitations. We clarified the numbers. See above

MedEdPublish (2016). 2025 Sep 2. doi: 10.21956/mep.22263.r42787

Reviewer response for version 1

Dipesh P Gopal 1

Firstly I welcome the authors' tireless efforts to change the norm. The write-up is comprehensive with clear mention of the methods, sample and limitations.

IAT testing is used before and after but the non-significant nature of the after effects of the teaching is not in the abstract. "We found no statistically significant correlation between implicit or explicit bias and impact on teaching or practice ( Table 5). We found a weak association between implicit race and explicit race measures indicating that these are related but distinct measures of race and gender biases. Implicit and explicit race and gender were associated suggesting that individuals who have bias in one area may hold implicit bias in other areas."

As we know self-reported change does not necessarily reflect change. Would a vignette study have been helpful here?

The conclusions are misleading: "There were lasting effects of implicit bias education on participants’ teaching and practice. Brief implicit bias education moved clinicians toward taking action to improve their teaching and practice. "

This study further adds to the growing literature that IAT testing / bias training does not significantly change views / minds / actions and this should be acknowledged in the study. There is even some research that bias training worsens existing biases.

Have any limitations of the research been acknowledged?

Yes

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Inequalities, have reviewed the literature on bias and published a well cited review. I am familiar with bias testing and the limits of IAT testing.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

MedEdPublish (2016). 2025 Dec 3.
Janice Sabin 1

Thank you for your comments. We clarified that the implicit and explicit measures were one-time baseline measures to characterize the sample. Revision: The IAT and explicit measures were used as a one-time, baseline measure of bias to characterize the sample and were not used as an IAT-with-feedback intervention to increase awareness of participants’ personal bias. (p6)

"We found no statistically significant correlation between implicit or explicit bias and impact on teaching or practice ( Table 5). We found a weak association between implicit race and explicit race measures indicating that these are related but distinct measures of race and gender biases. Implicit and explicit race and gender were associated suggesting that individuals who have bias in one area may hold implicit bias in other areas."

We changed our analysis from Pearson Product Moment correlation tests to one-way between-group analysis of variance (ANOVA) tests to better understand course impact on teaching and practice in relation to implicit and explicit bias scores and stages of change categories. (see Tables 4, 5)

As we know self-reported change does not necessarily reflect change. Would a vignette study have been helpful here?

A vignette study was beyond the scope of this study but would be useful in future studies.
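For readers replicating the revised analysis, the F statistic for a one-way between-groups ANOVA can be computed from first principles. This pure-Python sketch is illustrative only (the function name is ours, and the data passed to it would be the study's bias scores grouped by stage of change):

```python
def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for two or more groups.

    F = (between-group mean square) / (within-group mean square).
    """
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k = len(groups)      # number of groups (e.g. stages of change)
    n = len(all_values)  # total observations
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

In practice a statistics package would also supply the p value from the F distribution with (k-1, n-k) degrees of freedom; the sketch only shows where the test statistic comes from.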

  For all mentions of awareness of bias and actions taken we added that they were self-reported throughout the manuscript and added this to the limitations.

  The conclusions are misleading: "There were lasting effects of implicit bias education on participants’ teaching and practice. Brief implicit bias education moved clinicians toward taking action to improve their teaching and practice. "

Revision: Abstract Conclusions: This study found that the majority of study participants reported lasting effects of the course on their teaching and/or practice. Brief implicit bias education can impact clinicians’ teaching and practice. (p5)

This study further adds to the growing literature that IAT testing / bias training does not significantly change views / minds / actions and this should be acknowledged in the study. There is even some research that bias training worsens existing biases.

We addressed this topic by adding to this sentence: There is no clear evidence of optimal implicit bias curricula content, or educational evaluation strategies, and no irrefutable evidence that implicit bias education impacts teaching and practice 9 (p4)

MedEdPublish (2016). 2025 Aug 19. doi: 10.21956/mep.22263.r42537

Reviewer response for version 1

Jeffrey Stone 1

This paper reports the results of a study designed to examine the longer term effects of implicit bias education and training among providers over a 1 year period.  It is an important study that will make a contribution to the literature on implicit bias interventions in health care.  I would like to see the research published at some point.  

However, I am  concerned that the current paper over interprets the results in a way that could be misleading.  I offer the following observations and suggestions to clarify the results and improve the impact of the research.  

1.  The "lasting effects" in the title are very limited to a small subsample of those who completed the follow-up.  For example, only 47% (57 of 119) completed the follow up, and of those, only 65% (35 of the 57) said the intervention had an impact on them.  Thus, the claims of lasting effects are for only 29% of those exposed to the intervention, a critical limitation that is not adequately acknowledged.  Moreover, who were the non-respondents?  No information is provided about those who did not complete the follow-up (e.g., the demographics of the respondents suggest attrition was predominantly White males). We have to consider the possibility that the intervention was ineffective on providers who tend to show high bias and are the most resistant to learning about it.  It should be possible to address the non-response rate by reporting the demographics of those who did not respond (this appears possible from the way they collected the pre-intervention data).   It might be possible to address these concerns by, for example, reporting the pre-intervention D-score on the race and gender IATs on the full sample of 119, and comparing it to the subsample of 57 who completed the follow-up.   

2.  The coding of the stages of change were not conducted by coders who were unaware of the study purpose.  I worry that the coding may show more change than objective or naive coders would find.  It might be difficult to rule this out without training naive coders and having them replicate the results.  But the paper could acknowledge this potential limitation and its remedy (which other investigators can test if the data are made available).  

3.    it could be misleading to interpret the zero-order correlation between the bias and change scores as evidence for the effectiveness of the intervention.  The zero correlation could reflect a restriction in range or the unreliability of one or both measures, for example.  The poor reliability of the IAT is well documented, and the paper does not report information about the reliability of the explicit measures used in this study.  The paper should acknowledge and/or address this possible explanation.   

But the introduction did not say anything about whether bias scores measured before the intervention should correlate with the outcomes. Assuming the measures are valid and reliable, should there be no correlation, as the paper suggests, or should there be a positive correlation, which would indicate that the most biased providers were the most motivated by the intervention to contemplate or enact change? Isn't that the goal of the intervention? To clarify, the paper should provide more explanation of the expected relationship between the pre-intervention measures of bias and the outcomes.

Have any limitations of the research been acknowledged?

Partly

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

No

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

No

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

Implicit bias in health care

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

MedEdPublish (2016). 2025 Dec 3.
Janice Sabin 1

Thank you for your comments. We have changed the title to be more descriptive of our findings. The revised title is: Self-reported impacts one year after a brief health equity/implicit bias course for academic clinicians

We also clarify the impact of the course shown in Table 2. We added the following information: Revision: Across all four questions, 42 (75.0%) unique participants reported “yes”, the course had an impact on their teaching/mentoring and/or practice to at least one question, representing 35.3% of the original sample. (p10) Revision: We added to the limitations section: While 75.0% (n=42) of the Y2 study participants reported that yes, the course had an impact on teaching/practice, to at least one question, this represents 35.3% of the original sample from which study participants were drawn. (p19)

Moreover, who were the non-respondents? No information is provided about those who did not complete the follow-up (e.g., the demographics of the respondents suggest attrition was predominantly White males). We have to consider the possibility that the intervention was ineffective on providers who tend to show high bias and are the most resistant to learning about it. See Supplement Table: https://doi.org/10.17605/OSF.IO/PGS8Q 30

It should be possible to address the non-response rate by reporting the demographics of those who did not respond (this appears possible from the way they collected the pre-intervention data). Revision: Yes, we can address this question. We created a Supplemental Table that compares those from the original sample who did not respond in Year 2 to those who completed the Year 2 survey, and we report on the table on page 8. See Supplement Table: https://doi.org/10.17605/OSF.IO/PGS8Q 30

It might be possible to address these concerns by, for example, reporting the pre-intervention D-score on the race and gender IATs on the full sample of 119, and comparing it to the subsample of 57 who completed the follow-up.
See Supplement Table: https://doi.org/10.17605/OSF.IO/PGS8Q 30 We added to the text: We compared those who responded to those who did not and found a few differences in characteristics between participants who did and did not participate in the Y2 follow up study. Follow up study sample participants reported significantly fewer years in practice (16.0 years vs. 21.2, p=0.02), more worked in an academic healthcare system (82.1% vs. 63.2%, p=0.04), and held stronger explicit gender bias (female/family mean= -1.07 vs. mean= -0.63, p=0.02). Supplement Table, Data repository [ https://doi.org/10.17605/OSF.IO/PGS8Q 30 ] (p8)

2. The coding of the stages of change were not conducted by coders who were unaware of the study purpose. I worry that the coding may show more change than objective or naive coders would find. It might be difficult to rule this out without training naive coders and having them replicate the results. But the paper could acknowledge this potential limitation and its remedy (which other investigators can test if the data are made available). Revision: Limitations: For the qualitative analysis, coders who were trained in identifying stages of change using a practical thematic analysis strategy were aware of the study’s purpose, which may have had an impact on coding. (p18)

3. It could be misleading to interpret the zero-order correlation between the bias and change scores as evidence for the effectiveness of the intervention. The zero correlation could reflect a restriction in range or the unreliability of one or both measures, for example. The poor reliability of the IAT is well documented, and the paper does not report information about the reliability of the explicit measures used in this study. The paper should acknowledge and/or address this possible explanation. (see below) Revision: Limitations: Reliability of the IAT has been questioned; however, we used best practices in the design and implementation of the IAT in research that adjust for specific critiques of the IAT (Greenwald, 2022; Nosek, 2007). For example, we used two IATs in a single session, which is considered adequately reliable for reporting group differences; we used standard 7-block IAT construction; target concepts and stimuli were developed according to best practices; and analysis used the standard IAT D scoring algorithm. (p18)
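The Project Implicit cutoffs used for interpretation in the revised tables (±0.15, ±0.35, ±0.65) can be expressed as a small classifier of the continuous D score. This sketch is illustrative (the function name is ours, not from the study):

```python
def interpret_d_score(d: float) -> str:
    """Map an IAT D score to the standard Project Implicit category.

    Cutoffs: |D| < 0.15 little/no preference; 0.15 to 0.35 slight;
    0.35 to 0.65 moderate; 0.65 and above strong. The sign gives the
    direction of the association (e.g., in this study's scoring,
    positive on the race IAT indicates a pro-White association).
    """
    magnitude = abs(d)
    if magnitude < 0.15:
        return "little to no preference"
    if magnitude < 0.35:
        return "slight preference"
    if magnitude < 0.65:
        return "moderate preference"
    return "strong preference"
```

For example, the mean D score of 0.44 reported for the Action group falls in the moderate band, while the 0.12 mean for the no-impact group falls in the little-to-no-preference band, matching the categories reported in the responses above.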

Revision: Explicit Measures: We used standard explicit measures designed using best practices outlined in Greenwald et al. (2022) that correspond with the Race IAT. (p6) [ https://implicit.harvard.edu/implicit/]

But the introduction did not say anything about whether bias scores measured before the intervention should correlate with the outcomes. Assuming the measures are valid and reliable, should there be no correlation, as the paper suggests, or should there be a positive correlation, which would indicate that the most biased providers were the most motivated by the intervention to contemplate or enact change? We changed our analysis from Pearson Product Moment correlation tests to one-way between-group analysis of variance (ANOVA) tests to better understand course impact on teaching and practice in relation to implicit and explicit bias scores and stages of change categories. We examined stages of change and differences in participant implicit and explicit bias scores. (see Tables 4, 5)

Revision: With the exception of the impact of implicit race bias on teaching, there were no significant differences by stages of change in impact of course and implicit race and gender bias scores, with IAT scores for each stage being similar and ranging from slight to moderate bias (Table 4). For impact of course on teaching, there were significant differences in stages of change (p=0.036) on impact of the course by participant implicit race bias. Participants who reported that the course had no impact on teaching had no implicit race bias (mean=0.12, SD 0.49), those who were in the Contemplation stage held slight implicit pro-White bias (mean=0.30, SD 0.34), and those who reported taking Action (behavior change) due to the course held moderate implicit pro-White bias (mean=0.44, SD 0.38). There were no differences in impact of the course on teaching and practice based on participants’ explicit race biases, which showed little to slight explicit race bias (Table 5). Explicit gender bias scores for all stages of change and for no impact revealed strong associations of male with career and female with family (Table 5). (p17)

Isn't that the goal of the intervention?
To clarify, the paper should provide more explanation of the expected relationship between the pre-intervention measures of bias and the outcomes. Revision: This study was exploratory in nature. We did not know how implicit and explicit biases would affect reports of the impact of the course on teaching and practice in a one year follow up. Studies, including the baseline study from which we drew our sample, (add Cite # 14)) have found that brief online implicit bias education can increase bias awareness and intentions to change behavior 13 15 . In this study, we returned to a sample of primary care clinical faculty one year after they took a brief online course and used the method of personal reflection for participants to report on the impact of the course in their teaching and clinical practice over the past year. The aim of this study was to explore whether and how the course had an impact on clinicians’ teaching and practice during the one year following taking the course. (p5)
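The revised analysis described above compares mean IAT D scores across stage-of-change groups with a one-way between-group ANOVA. As a minimal illustrative sketch only (this is not the authors' analysis code, and the D scores below are made-up example values, not study data), the F statistic for such a comparison can be computed as:

```python
def one_way_anova_f(groups):
    """One-way between-group ANOVA F statistic.

    groups: list of lists, each holding one group's IAT D scores
    (e.g., participants grouped by stage of change).
    """
    k = len(groups)                                   # number of groups
    n_total = sum(len(g) for g in groups)             # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    ms_between = ss_between / (k - 1)        # between-group mean square
    ms_within = ss_within / (n_total - k)    # within-group mean square
    return ms_between / ms_within


# Hypothetical D scores for three groups: no impact / Contemplation / Action
no_impact = [0.10, -0.35, 0.55, 0.02, 0.28]
contemplation = [0.25, 0.40, 0.05, 0.50, 0.30]
action = [0.45, 0.60, 0.20, 0.55, 0.40]
f_stat = one_way_anova_f([no_impact, contemplation, action])
print(round(f_stat, 3))
```

In practice, the p value reported (e.g., p = 0.036) would be obtained from the F distribution with (k − 1, N − k) degrees of freedom, for instance via a statistical package such as SciPy's `scipy.stats.f_oneway`.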

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Availability Statement

    Underlying data

    Due to ethical and security reasons and the need to protect participant privacy, we are not able to publicly share the data because we do not have adequate consent from our participants. In this study, even deidentified transcripts may be identifiable because of the specific examples (e.g., specific actions participants reported) and contexts (setting-specific information, etc.) that participants shared; thus confidentiality may not be protected. Violation of confidentiality is of particular concern in this study because of the sensitive information participants shared. For example, if a participant were identified, this could result in penalties. This study was approved by the University of Washington Institutional Review Board. For queries related to the ethics of data access, contact Marya Kinsler of the UW IRB at 206-543-0471 or maryaj@uw.edu. We may consider requests for sharing de-identified data on a case-by-case basis. To share the data, we would need a description of the purpose of use, the credentials of the individuals who would work with the data, confirmation of ethics training of the research team, the requestor's institutional IRB approval for the research, and UW IRB review and approval for secondary use of the data. We would also need to take additional steps to remove any remaining sensitive information. Data sharing requests may be granted only if the above conditions are met and the request is feasible. Please contact Dr. Janice Sabin (sabinja@uw.edu) or Dr. Bianca Frogner (bfrogner@uw.edu) for data sharing requests.

    Extended data

    Lasting effects of brief health equity/implicit bias education for academic clinicians: From learning to action, https://doi.org/10.17605/OSF.IO/PGS8Q 21

    This project contains the following underlying data:

    Underlying data file 1: dataset (Lasting Effects dataset) (restricted access)

    Underlying data file 2: dataset description (Lasting Effects dataset-codebook), Consent Form, Survey (unrestricted access)

    Underlying data file 3: Supplement Table: Sample Non-responders versus Responders Comparisons (N=114)


    Articles from MedEdPublish are provided here courtesy of Association for Medical Education in Europe
