Nature Communications. 2025 Sep 9;16:8218. doi: 10.1038/s41467-025-63381-7

Racial stereotypes bias the neural representation of objects towards perceived weapons

DongWon Oh 1, Henna I Vartiainen 2, Jonathan B Freeman 3
PMCID: PMC12420810  PMID: 40925899

Abstract

Racial stereotypes have been shown to bias the identification of innocuous objects, making objects like wallets or tools more likely to be identified as weapons when encountered in the presence of Black individuals. One mechanism that may contribute to these biased identifications is a transient perceptual distortion driven by racial stereotypes. Here we provide neuroimaging evidence that a bias in visual representation due to automatically activated racial stereotypes may be a mechanism underlying this phenomenon. During fMRI, tools presented after Black face primes induced neural response patterns that exhibited a biased similarity to independent gun images in object-discriminative regions of the ventral temporal cortex involved in the visual perception of objects. Moreover, these neural representational shifts predicted the magnitude of participants’ racial bias, as reflected by differences in response times during weapon identification due to Black versus White face primes. Together, these findings suggest that stereotypes can shape the visual representation of socially-relevant objects in line with preconceived notions, thereby contributing to racially biased responding.

Subject terms: Human behaviour, Social neuroscience


Due to racial stereotypes, innocuous objects (e.g. a tool) can be misperceived as a gun when presented immediately after a Black individual’s face. Here, the authors examine the neural basis of this effect, showing that neural response patterns to tools in visual perception regions become more similar to those typically elicited by guns, contributing to racially biased responding.

Introduction

In August 2022, Donovan Lewis, a 20-year-old Black man, was fatally shot by a police officer with decades of experience in Columbus, Ohio. Police body camera footage showed that the officer fired the shot just moments after opening Lewis’s bedroom door, where Lewis appeared to be holding an object. No weapon was found at the scene; the object was a mere vape pen. Two years prior in the same city, another Black man, Andre Hill, was fatally shot by a police officer as Hill emerged from a garage holding up a cell phone. These tragic incidents are hardly isolated examples, and they reflect a well-documented pattern of disproportionate lethal force against unarmed racial minorities in the US1. A better understanding of the mechanisms that drive such racially biased responses may aid in developing interventions to eliminate them.

Behavioral studies have long shown that racial contexts affect patterns of object identification. Innocuous objects such as tools are more likely to be mistaken for guns in the presence of Black relative to White people. In cases where Black-primed tools are correctly recognized as tools (rather than guns), a measurable delay occurs before people make their identifications2–5. This weapon identification bias is related to negative stereotypes in the US associated with Black people that center on danger and crime; accordingly, those harboring stronger racial stereotypes have exacerbated weapon identification biases2,6–8. The literature has generally regarded the weapon identification bias as a form of response priming, whereby a Black person’s face facilitates the motor response associated with a gun categorization due to stereotypical associations. From this perspective, when participants view a Black face followed by an image of an object such as a tool, the visual processing of the object and its perceptual representation are not disrupted. Instead, participants see the object faithfully, but the Black person’s face facilitates the motor response associated with “gun” in an unintentional manner that can be difficult to control9,10.

Current models of social vision and a growing number of studies have pointed to interactions between social cognition and visual perception, whereby stereotypical expectations lead perceptual judgments of faces and their neural representations to better conform to those expectations11–13. These effects have been argued to relate to domain-general properties of an adaptive perceptual system and, accordingly, similar behavioral and neural effects have been documented in object recognition14,15. Indeed, the effects of top-down expectations and perceptual ‘priors’ on object representations in ventral-temporal cortex are well described16–19. In the context of weapon bias, such findings raise the possibility that stereotypes not only prime the motor system as conventionally theorized9, but that they also may prime the visual system and temporarily affect how an object is perceptually represented. Thus, seeing a Black face may activate a top-down expectation that, during the perception of an innocuous object like a tool, transiently distorts its representation toward “gun”. Once enough bottom-up perceptual processing has unfolded, as part of a competitive process such top-down modulation would give way to a more accurate, stable representation20,21. Using fMRI neural decoding approaches, the present research tests the possibility of this top-down modulation directly and provides evidence for a form of visual confirmation bias that racial stereotypes exert on object identification.

A primary region likely to be a key target of this top-down modulation is the ventral temporal cortex, including regions involved in categorical representations of object stimuli such as the lateral occipital complex (LOC), fusiform gyrus (FG), and posterior middle temporal gyrus (pMTG)22–24. While regions such as LOC have been implicated in object representations broadly, processing in the FG and pMTG has been specifically linked to representing graspable object stimuli like tools or weapons in particular, likely due to their strong action-related properties25,26. Moreover, previous research has shown that social-semantic information manifests in the representational structure of regions such as the FG when viewing face stimuli27–29. Thus, in the context of a Black face, these ventral temporal representations of tool stimuli may be partly influenced by the learned stereotypical relationship between Black individuals, danger, and weapons, thereby biasing such representations toward “gun”.

In the present research, we identified independent neural patterns related to tool and gun object categories outside of any racial context (i.e., without preceding Black or White face primes), and we compared these against neural activity during an object identification task where tool and gun stimuli were primed by either White or Black faces. Due to a strong Black-gun stereotypical association documented by behavioral studies, we predicted that multi-voxel representations of Black-primed tools would be biased toward the gun category (relative to White-primed tools), and that this neural representational bias would predict participants’ response-time delays in categorizing Black-primed tools successfully as tools, rather than guns. We did not have strong predictions about a possible bias in neural response patterns or response times specifically to gun targets, given the asymmetric nature of the stereotypical association (i.e., White people are not stereotypically associated with tools).

Results

During fMRI, 31 right-handed participants without history of neurological disease or use of psychoactive medications (M age = 23.10 years, SD age = 5.21 years; 22 Asian, 1 Hispanic, 7 White, 1 Multiracial; 21 female, 10 male) underwent three tasks: a gun/tool pattern localizer task, a weapon identification task, and a control identification task. All participants self-reported their race/ethnicity and gender. In the pattern localizer task, participants only saw gun and tool images without any faces, while performing a 1-back task in which they indicated via button-press whether an identical object image had repeated in sequence (used to ensure participant attention) (2 runs) (Fig. 1, Top). The pattern localizer runs provided an independent neural activation pattern for guns and an independent neural activation pattern for tools, without any preceding Black or White face primes.

Fig. 1. Experimental procedures.


Procedures of the gun/tool pattern localizer task (top), weapon identification task (middle), and control identification task (bottom). In all panels, blue boundaries and lines denote time points during which participant response was collected. Note that while the faces and objects shown represent the categories of stimuli used in the study, they are not the original stimuli. Face stimuli shown are from the Face Research Lab London Set by Jones and DeBruine56, licensed under CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/). All original face and object stimuli used in the study are available for research purposes on the Open Science Framework (https://osf.io/j4n5w/).

In weapon identification task runs, each trial began with a White or Black face (prime), which was followed by a gun or tool image (target); participants categorized the target as either a gun or tool via button-press as quickly and accurately as possible (6 runs) (Fig. 1, Middle). Target gun and tool images were matched on low-level visual properties (luminance and contrast). The Black and White faces were kept in their original form, as unnatural alterations to Black and White faces may artificially mitigate the activation of racial stereotypes that naturally arise during social perception. However, we ensured that Black and White faces were equally visually similar to the gun and tool images used.

Specifically, we used ResNet50, a deep convolutional neural network for image recognition, to analyze the visual features of our stimuli. ResNet50 has been shown to have superior performance in accounting for neural response patterns to images30. The network processes images in layers: early layers detect basic visual elements like edges, textures, and simple patterns with minimal abstraction, while later layers recognize more complex and abstract features, such as handles or barrels, that help identify objects. For each face prime (Black or White), we calculated its visual similarity to all gun images and to all tool images used, and then compared whether Black faces had stronger gun-vs-tool similarity than White faces did. Permutation testing found no statistically significant difference between Black and White faces in their similarity to guns (relative to tools), either at the early visual level (early network layer, p = 0.391) or the complex featural level (final network layer, p = 0.298) (see Methods). In other words, the Black faces used were not inherently more “gun-like” than White faces in their visual characteristics, ensuring that the hypothesized effects cannot be attributed to Black faces sharing more gun-like visual properties than White faces.
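To make the procedure concrete, the permutation test can be sketched as follows (a minimal sketch with hypothetical variable names; it assumes per-image feature vectors from a given ResNet50 layer have already been extracted, as described in Methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def gun_vs_tool_similarity(face_feats, gun_feats, tool_feats):
    """Per face: mean Pearson r to all gun images minus mean r to all tool images."""
    def mean_r(face, objects):
        return np.mean([np.corrcoef(face, obj)[0, 1] for obj in objects])
    return np.array([mean_r(f, gun_feats) - mean_r(f, tool_feats) for f in face_feats])

def permutation_p(black_feats, white_feats, gun_feats, tool_feats, n_perm=10_000):
    """Two-tailed permutation p for the Black-vs-White difference in gun similarity."""
    sims = gun_vs_tool_similarity(
        np.vstack([black_feats, white_feats]), gun_feats, tool_feats)
    labels = np.array([1] * len(black_feats) + [0] * len(white_feats))
    observed = sims[labels == 1].mean() - sims[labels == 0].mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(labels)  # shuffle Black/White labels
        null[i] = sims[shuffled == 1].mean() - sims[shuffled == 0].mean()
    return np.mean(np.abs(null) >= np.abs(observed))
```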

In control identification task runs, participants performed the same task as in the weapon identification task runs, except that the target stimuli were control images of leftward vs. rightward diagonal grey line patterns (2 runs) (Fig. 1, Bottom). The control identification task runs were included to ensure that neural-pattern similarity biases observed in the weapon identification task runs do not arise from physical stimulus confounds, such as hemodynamic lag or carryover effects from the face stimuli. Specifically, if the hypothesized bias toward “gun” for Black-primed tool images were observed across both the weapon and control identification runs, rather than uniquely in the weapon identification task runs, it would suggest carryover effects from the Black face prime rather than the hypothesized bias. See Methods for full details.

Our neuroimaging analyses examined multi-voxel response patterns to tool and gun images during the weapon and control identification tasks, comparing them against response patterns to the independent tool and gun images (presented in isolation during the initial pattern localizer task). Note that the pattern localizer task, which used the same object images as the weapon identification task, proceeded without race ever having been mentioned in the study and prior to the presentation of Black and White faces. The localizer task thereby provided neural patterns for each object category outside any racial context.

Our key measure was how similar/dissimilar each neural response pattern was to the pattern typically observed when viewing guns vs. viewing tools. We first measured how dissimilar each response pattern was from the typical gun pattern and from the typical tool pattern, using Pearson distance. We then computed a relative gun-vs-tool dissimilarity score for each condition: the difference (gun dissimilarity – tool dissimilarity) between how dissimilar the condition’s response pattern in the weapon or control identification task was to the pattern typically observed for guns (in the localizer) and how dissimilar it was to the pattern typically observed for tools (in the localizer). A smaller value on this measure indicates that the neural pattern was more similar to the typical response to guns than to tools. This relative gun-vs-tool dissimilarity measure was used because neural response patterns to guns and to tools would be expected to differ between the initial localizer and subsequent identification tasks in an absolute sense (e.g., the absence/presence of a prime, the new task context involving race, one-back vs. categorization processing goals), and we aimed to isolate the extent to which objects in the identification tasks evoke response patterns relatively more similar to guns than tools, independent of such absolute differences. Any scan runs with excessive head movement (>3 mm in any direction) were excluded from analysis, resulting in the exclusion of one participant’s control identification runs.
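As a concrete illustration, the relative measure can be computed as in the following minimal sketch (hypothetical variable names; note that the Methods additionally apply a Fisher z-transform to the correlations before computing distances):

```python
import numpy as np

def pearson_distance(a, b):
    """Pearson correlation distance (1 - r) between two voxel patterns."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def weapon_dissimilarity(pattern, localizer_gun, localizer_tool):
    """Relative gun-vs-tool dissimilarity: smaller (more negative) values mean
    the pattern sits closer to the independent gun pattern than to the
    independent tool pattern."""
    return (pearson_distance(pattern, localizer_gun)
            - pearson_distance(pattern, localizer_tool))

# Hypothetical use: the bias contrast for tool targets
# bias = (weapon_dissimilarity(black_tool_pattern, loc_gun, loc_tool)
#         - weapon_dissimilarity(white_tool_pattern, loc_gun, loc_tool))
```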

Response time delays in identifying Black-primed tools

In behavioral studies of the weapon bias task, participants tend to show delayed response times when categorizing Black-primed tools as tools (rather than guns) and/or tend to misidentify Black-primed tools as guns, depending on methodological factors9. Given the timing of our adapted fMRI version of the task, we expected to observe a high level of accuracy and racial bias only in response times. Indeed, participants showed a high level of accuracy in the weapon identification task (M = 95.80%) and in the control identification task (M = 94.87%) (Supplementary Fig. 1). For analyses of response times, incorrect trials were discarded. Due to button box malfunction, one participant’s responses were not recorded during the scans, resulting in a loss of behavioral data for this participant.

A target (gun, tool) × race (Black, White) repeated-measures ANOVA on response times revealed the predicted weapon bias, with a significant target × race interaction, F(1, 29) = 6.54, p = 0.016, partial η² = 0.18, 95% CI [0.03, 0.39] (Supplementary Fig. 2). Participants were slower to correctly identify tools when they were preceded by a Black face (M = 540 ms) relative to a White face (M = 528 ms), t(29) = 3.46, p = 0.002, d = 0.64, 95% CI [0.24, 1.03]. Response times did not statistically differ in correctly identifying guns preceded by Black (M = 507 ms) relative to White faces (M = 513 ms), t(29) = –1.40, p = 0.172. Participants were also overall faster to identify guns than tools, with a significant main effect of target, F(1, 29) = 20.64, p < 0.001, partial η² = 0.42, 95% CI [0.14, 0.60]. The main effect of race was not significant, F(1, 29) = 2.39, p = 0.133. This pattern replicates prior studies2,5 and indicates that our participants experienced the hypothesized delay in identifying an image of a tool as a tool, rather than a gun, when presented in the context of a Black person.

To demonstrate the robustness and generalizability of these behavioral effects, we conducted two behavioral replication studies with larger and more diverse samples, including Black participants, and a wider set of Black and White face stimuli and object stimuli (see Supplementary Information for full details). In both studies, we used identical timing and experimental parameters as our fMRI weapon identification task. In the first replication study (N = 200), we successfully reproduced the weapon bias effect. A 2 × 2 repeated-measures ANOVA revealed the predicted prime × target interaction, F(1, 199) = 12.90, p < 0.001, partial η² = 0.06, 95% CI [0.01, 0.13], where participants showed significantly slower responses to tools when primed with Black faces compared to White faces, t(199) = 3.35, p < 0.001, d = 0.24, 95% CI [0.10, 0.38]. To examine potential moderation by participant race, race was coded as White, Black, or Another Race and included as a between-subject factor in a mixed-model ANOVA. The three-way interaction between prime, target, and participant race was not significant, F(2, 197) = 0.14, p = 0.872, suggesting that the weapon bias effect did not significantly vary across participant racial groups.

In a preregistered direct replication (N = 222; preregistered at https://osf.io/4wsvn on 2024/10/15) powered to detect such a three-way interaction with participant race, a 2 × 2 repeated-measures ANOVA again revealed a significant prime × target interaction, F(1, 221) = 24.95, p < 0.001, partial η² = 0.10, 95% CI [0.04, 0.18], with tools categorized more slowly when preceded by Black compared to White faces, t(221) = 4.54, p < 0.001, d = 0.30, 95% CI [0.17, 0.44]. When including participant race as a between-subject factor, a mixed-model ANOVA again showed no statistically significant three-way interaction, F(2, 219) = 0.98, p = 0.376, suggesting that the observed racial bias in object recognition did not significantly vary across participant racial groups. Together, these results demonstrate that the weapon bias effect produced in our fMRI sample is robust and generalizable across larger and more diverse samples, stimulus sets, and participant racial groups. Full details and results of the replication studies are provided in the Supplementary Information.

Representational bias in object-sensitive cortex and right ventral-temporal cortex

To examine the impact of the racial context on visual representations of objects, we conducted multiple converging analyses. First, we tested for bias in response patterns within neural regions that were responsive during object presentations, providing an inclusive form of voxel feature selection. A whole-brain univariate contrast of objects > fixation from the weapon identification task was used to identify neural regions sensitive to the processing of gun and tool images, corrected using threshold-free cluster enhancement (TFCE) (p < 0.05). As shown in Fig. 2a, this analysis yielded swaths of the ventral temporal cortex, bilaterally, including regions involved in the recognition of objects and graspable stimuli in particular (e.g., tools and guns), such as the LOC, FG, and pMTG (left: x = –44.5, y = –78.5, z = –6.5, 11,332 voxels, mean t = 4.94; right: x = 41.5, y = –72.5, z = –10.5, 10,116 voxels, mean t = 5.06)25,26. Based on data from the pattern localizer task (without any racial context), a support vector machine (SVM) classification analysis using leave-one-out cross-validation found that multivariate patterns in these object-sensitive regions reliably distinguished between gun and tool images. The classifier achieved a mean accuracy of 69.35% (SE = 4.96%) across participants, significantly above the chance level of 50%, W = 285, p < 0.001 (one-sample Wilcoxon signed-rank test). A repeated-measures ANOVA on neural response patterns in object-sensitive cortex found a significant main effect of target, such that gun images in the weapon identification task elicited response patterns more similar to independent guns than tools (and vice-versa for tool images), F(1, 30) = 17.17, p < 0.001, partial η² = 0.36, 95% CI [0.10, 0.56], corroborating these regions’ sensitivity to object category. More importantly, there was a significant target × race interaction, F(1, 30) = 5.33, p = 0.028, partial η² = 0.15, 95% CI [0.00, 0.37], which, as predicted, arose because response patterns to tool images were biased toward the gun category when preceded by a Black relative to a White face, t(30) = –2.82, p = 0.008, d = –0.51, 95% CI [–0.88, –0.13], whereas no statistically significant bias was observed for gun images, t(30) = –0.07, p = 0.944 (Fig. 2b). The analogous target × race interaction in the control identification task was not significant, F(1, 29) = 0.03, p = 0.868, nor was the main effect of race, F(1, 29) = 1.93, p = 0.175 (Supplementary Fig. 3), indicating a lack of any statistical evidence that the biasing effect was due to carryover of the facial prime.

Fig. 2. Biased response patterns in object-sensitive cortex.


a Object-sensitive cortex, i.e., regions responsive to object stimuli (objects > baseline) surviving TFCE correction at p < 0.05, two-tailed, served as an inclusive map for feature selection (left hemisphere: 11,332 voxels, mean t = 4.94; right hemisphere: 10,116 voxels, mean t = 5.06). b Weapon dissimilarity (pattern dissimilarity to independent guns relative to independent tools) is plotted as a function of prime and target conditions, where smaller values indicate greater pattern similarity to independent guns. Black primes biased neural response patterns for tool images toward independent guns. Individual dots denote individual participants’ data points (n = 31). Cutoffs in violin plots represent minimum and maximum values in each condition; center lines show median, box bounds show 25th–75th percentiles. Error bars denote mean ± within-subject-corrected SEM57. c The magnitude of bias in these neural response patterns was associated with participants’ response-time delay in categorizing Black-primed tools as tools. Individual dots denote individual participants’ data points (n = 30). The shaded area represents a 95% confidence interval. TFCE = threshold-free cluster enhancement.

To the extent that such neural representational biases played a role in participants’ behavioral bias during the same task (i.e., their delay in identifying Black-primed tools as tools, rather than guns), the behavioral and neural effects should be positively correlated. For each participant, a standardized response-time difference score between Black-primed tools and White-primed tools was calculated, indexing the participant’s delay in categorizing Black-primed tools as tools. Indeed, the response-time delay in categorizing Black-primed tools was significantly related to Black-primed tools’ increased neural-pattern similarity to “gun” in object-sensitive regions (Fig. 2c; Spearman ρ = 0.48, p = 0.004, 95% CI [0.15, 0.71]).

These results were corroborated with a whole-brain searchlight analysis to directly identify any regions whose response patterns demonstrated the a priori predicted bias of Black primes on tool images (p < 0.05, TFCE corrected). This analysis revealed an extensive region of the right ventral temporal cortex, including the LOC, FG, and pMTG (Fig. 3a; x = 53.5, y = –48.5, z = –30.5, 7652 voxels, mean t = 3.02); no other regions survived correction. Neural response patterns in this region again showed the predicted target × race interaction, F(1, 30) = 4.85, p = 0.035, partial η² = 0.14, 95% CI [0.00, 0.36], with response patterns to tool images more closely approximating gun patterns when preceded by a Black relative to a White face, t(30) = –3.51, p = 0.001, d = –0.63, 95% CI [–1.02, –0.24], whereas no statistically significant difference was observed for gun images, t(30) = –0.46, p = 0.647 (Fig. 3b). The analogous interaction in the control identification task was not statistically significant, F(1, 29) = 0.01, p = 0.904, nor was the main effect of race, F(1, 29) = 0.02, p = 0.891 (Supplementary Fig. 4), indicating a lack of any statistical evidence that the biasing effect was due to carryover of the facial prime. Finally, the response-time delay in categorizing Black-primed tools was again related to Black-primed tools’ increased neural-pattern similarity to the gun category in this region (Fig. 3c; Spearman ρ = 0.45, p = 0.008, 95% CI [0.11, 0.69]). See Supplementary Information for full details.

Fig. 3. Whole-brain searchlight analysis results.


a An extensive region of the right ventral temporal cortex (p < 0.05, TFCE corrected, two-tailed; 7652 voxels, mean t = 3.02), including the LOC, FG, and pMTG, whose local response patterns demonstrated the predicted bias of Black primes on tool images. b Weapon dissimilarity (pattern dissimilarity to independent guns relative to independent tools) is plotted as a function of prime and target conditions, where smaller values indicate greater pattern similarity to independent guns. Black primes biased neural response patterns for tool images toward independent guns. Individual dots denote individual participants’ data points (n = 31). Cutoffs in violin plots represent minimum and maximum values in each condition; center lines show median, box bounds show 25th–75th percentiles. Error bars denote mean ± within-subject-corrected SEM57. c The magnitude of bias in these neural response patterns was associated with participants’ response-time delay in categorizing Black-primed tools as tools. Individual dots denote individual participants’ data points (n = 30). The shaded area represents a 95% confidence interval. TFCE = threshold-free cluster enhancement. LOC = lateral occipital complex. FG = fusiform gyrus. pMTG = posterior middle temporal gyrus.

Testing the effects of visual confounds

A potential concern might be that the biasing effect of Black primes on tool images toward ‘gun’ is explained by an intrinsic visual similarity between Black face primes and gun targets. Such physical confounds due to hemodynamic lag from the preceding face stimuli would have manifested in the control identification runs, as well as in increased pattern similarity for Black-primed guns relative to White-primed guns in the weapon identification runs, none of which was observed. Moreover, as discussed earlier, the target gun and tool images were equated on luminance and contrast, and a computational image recognition model (ResNet50) found that the Black and White faces were equally similar to the gun vs. tool images in both higher-level and lower-level visual features. Nevertheless, to eliminate the possibility that some featural overlap between each Black face prime on any given trial and the independent gun images spuriously produced the pattern of findings, we conducted a trial-by-trial analysis to control for visual similarity.

For every tool trial in the weapon identification task, we estimated the face prime’s average visual similarity to all gun images and its average visual similarity to all tool images in the pattern localizer task, creating an index of visually-based weapon similarity/dissimilarity (mean r_guns – mean r_tools) both for higher-level visual features (ResNet50 final layer, i.e., layer 50) and lower-level visual features (ResNet50 early layer 10) (see Methods). A new design matrix was used to estimate neural responses to individual tool and gun targets in the weapon identification task on a trial-by-trial basis (see Methods). As in the primary analyses, weapon similarity/dissimilarity (i.e., dissimilarity to the average independent gun pattern relative to the average independent tool pattern) was again calculated for these pattern estimates of individual tool trials. These values were then submitted to multi-level regression models (linear mixed-effects regressions) with predictors of prime condition (Black, White) and the visual covariate, i.e., the visual similarity between the face image on the current trial and the independent guns vs. independent tools.
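The mixed-effects regression step could be sketched as follows (the text does not specify the software used; this sketch assumes a statsmodels implementation with hypothetical column names and a random intercept per participant):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per tool trial, with hypothetical columns:
#   subject        participant ID (random-intercept grouping factor)
#   prime          0 = White face prime, 1 = Black face prime
#   visual_cov     the prime's ResNet50 gun-vs-tool visual similarity index
#   weapon_dissim  trialwise neural gun-vs-tool dissimilarity score
trials = pd.read_csv("tool_trials.csv")  # assumed precomputed table

# Prime effect alone
m0 = smf.mixedlm("weapon_dissim ~ prime", trials, groups=trials["subject"]).fit()

# Prime effect controlling for trialwise visual similarity of the face prime
m1 = smf.mixedlm("weapon_dissim ~ prime + visual_cov", trials,
                 groups=trials["subject"]).fit()

print(m0.summary())
print(m1.summary())
```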

As expected, without controlling for visual similarity, a Black prime on tool trials led to greater pattern similarity to independent guns (relative to a White prime) in object-sensitive cortex (B = –0.004, SE = 0.001, t = –3.15, p = 0.002, 95% CI [–0.006, –0.001]) and the region of the right ventral temporal cortex revealed by the searchlight analysis (B = –0.004, SE = 0.001, t = –3.07, p = 0.002, 95% CI [–0.006, –0.001]), replicating the bias observed in the primary analyses, now at a trial-by-trial level. Critically, the biasing effect of the prime’s race on tool representation remained consistent after including either visual similarity estimate (ResNet’s final layer or early layer) as a covariate in the model, both for the effect in object-sensitive cortex (final layer: B = –0.005, SE = 0.001, t = –4.15, p < 0.001, 95% CI [–0.008, –0.003]; early layer: B = –0.004, SE = 0.001, t = –3.15, p = 0.002, 95% CI [–0.006, –0.001]) and in the right ventral temporal cortex region (final layer: B = –0.005, SE = 0.001, t = –3.52, p < 0.001, 95% CI [–0.007, –0.002]; early layer: B = –0.004, SE = 0.001, t = –3.07, p = 0.002, 95% CI [–0.006, –0.001]). The results remained unchanged when treating all images as grayscale, both for object-sensitive cortex (final layer: B = –0.004, SE = 0.001, t = –3.19, p = 0.001, 95% CI [–0.006, –0.001]; early layer: B = –0.003, SE = 0.001, t = –2.68, p = 0.007, 95% CI [–0.006, –0.001]) and the right ventral temporal cortex region (final layer: B = –0.004, SE = 0.001, t = –2.89, p = 0.004, 95% CI [–0.006, –0.001]; early layer: B = –0.004, SE = 0.001, t = –2.68, p = 0.007, 95% CI [–0.006, –0.001]).

To further rule out potential confounds, we conducted additional analyses examining “baseline” similarities in neural response patterns between object stimuli (in the pattern localizer task) and face stimuli (in the control identification task) when race was not relevant to the task. A 20 × 20 matrix (20 object images from the localizer × 20 face images from the control identification task) captured the neural-pattern correlations between all individual face and object stimuli (outside a racial context). These correlations were subjected to an F-test with permutation-based significance testing to assess whether the observed differences in Black-gun, Black-tool, White-gun, and White-tool correlations were larger than expected by chance under the null hypothesis of no differences by condition pairing (see Methods). The F-test indicated no significant differences in mean neural-pattern correlations among the four condition pairings in either object-sensitive cortex (p = 0.687) or the right ventral temporal cortex region (p = 0.569). We therefore detected no statistical evidence of inherent differences in “baseline” neural-pattern similarities between face and object stimuli (outside a racial context) that could spuriously account for the findings.
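A compact sketch of this permutation-based F-test (hypothetical array names; it assumes the 20 × 20 correlation matrix and the object/face condition labels are precomputed):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

def pairing_F(corr, object_is_gun, face_is_black):
    """F statistic across the four condition pairings (Black-gun, Black-tool,
    White-gun, White-tool) of the object x face correlation matrix."""
    groups = [corr[np.ix_(object_is_gun == g, face_is_black == b)].ravel()
              for g in (True, False) for b in (True, False)]
    return f_oneway(*groups).statistic

def permutation_p(corr, object_is_gun, face_is_black, n_perm=10_000):
    observed = pairing_F(corr, object_is_gun, face_is_black)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = pairing_F(corr,
                            rng.permutation(object_is_gun),   # shuffle object labels
                            rng.permutation(face_is_black))   # shuffle face labels
    return np.mean(null >= observed)
```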

Another concern may be that the gun or tool images elicited more homogeneous neural response patterns within their respective object category (e.g., that gun response patterns were more similar to one another than tool response patterns were to one another), introducing potential confounds. However, using data from the pattern localizer task, permutation testing showed that gun images’ pairwise within-category similarities did not statistically differ from those of tool images in either object-sensitive cortex (p = 0.363) or the right ventral-temporal cortex region (p = 0.453), suggesting that within-category variation for guns and within-category variation for tools were not statistically different (see Methods).

Together, these analyses rule out the possibility that a Black prime’s effect on tool images was spuriously produced by some kind of intrinsic visual or featural similarity between the Black faces and gun images used or some other confound in image-evoked response patterns.

Discussion

We provide evidence for an alteration in neural representations of objects when encountered in the context of Black or White faces. The neural response pattern of tool images became more similar to an independent gun pattern in the context of a Black relative to a White individual. This representational bias was observed in response patterns of the ventral temporal cortex involved in the visual processing of graspable objects, including the LOC, FG, and pMTG. Moreover, the extent of participants’ neural representational bias in these regions was associated with a corresponding cognitive delay in identifying Black-primed tools as tools (rather than guns), together suggesting a more reflexively activated gun representation in the context of Black-primed tools. This response-time delay in identifying Black-primed tools due to racial bias was replicated in two larger behavioral samples (total n = 422) with a wider range of stimuli, and we found no statistically significant variation across participant racial groups (see Supplementary Information for details), consistent with previous research6. Crucially, the control identification runs that presented identical Black face primes revealed no statistically significant effects; it was the unique combination of Black faces and tool images during weapon identification runs that led to a biased gun representation. Moreover, target stimuli were equated on both low-level and high-level visual features, and sensitivity analyses that controlled for visual similarity demonstrated the robustness of Black face primes’ distortion of tool images. Together, the findings demonstrate the impact of racial bias on temporal cortical representations of objects in ways that conform to one’s preconceived associations, which cannot be explained by mere physical confounds.

In our study, multiple analyses demonstrated that perceiving a tool image in the context of a Black person shifted neural response patterns in regions implicated in the visual representation of objects. Such top-down impact of social-conceptual associations on ventral-visual representation is consistent with a large body of work demonstrating the presence of contextual and semantic information about objects within numerous ventral-visual regions31–38. In the social domain, stereotypes and other social-conceptual processes have previously been shown to affect FG face representations27,28. Such research has shown that the structure of perceptual face representations is correlated with the structure of conceptual stereotype representations in ventral-visual regions such as the FG. Here we extend such social cognitive effects to object representations, further suggesting that social-conceptual influences on visual perception are supported by the same domain-general mechanisms as other top-down effects on perception21. We also bolster the evidence for these top-down effects by way of the sequential priming paradigm. By demonstrating that the social context in which a stimulus is presented biases its visual processing, the current findings show that identical images can be represented in different ways in ventral-visual regions depending on their stereotypical association with the current context (e.g., a Black or White person). These results suggest that the top-down biasing effects of stereotypes on ventral-visual representations can be transient in nature and instantiated in real time, rather than inertly present within the perceptual system due to long-term learning. This is consistent with other recent findings showing that the effect of stereotypes on ventral-visual representations can be transiently “de-biased” using backward masking29.

While previous behavioral studies on weapon bias have largely favored interpretations based on response facilitation and failure to control one’s bias9, our findings provide a different explanation based on the transient alteration in an object’s visual representation. Interestingly, early research on weapon bias considered the possibility of a “fleeting illusion” operating at a perceptual level when misidentifying Black-primed tools as guns (or partially activating “gun” when identifying Black-primed tools as tools, leading to delayed response times)9,10. But while studies at the time used confidence judgments and related measures to rule out more enduring forms of a genuine “illusion”, researchers acknowledged that behavioral data alone could not rule out a fleeting form of top-down perceptual distortion8,10. The weapon bias literature generally moved on to focus on response-level issues such as self-control failure. Here, using neural decoding, our findings provide support for this early-articulated hypothesis of a fleeting perceptual bias, which we show to manifest across the ventral temporal cortex and to predict classic weapon bias effects. However, it is important to note that these explanations are by no means mutually exclusive. Both a transient perceptual bias and the traditionally studied post-perceptual biases (related to control processes that operate at a response level) likely co-exist in accounting for weapon bias effects, and the evidence for response-level priming in weapon bias is robust and long established2,8,9. Future research could build on the present findings to map the specific interactions between these perceptual and post-perceptual processes in the context of weapon bias effects.

Racially biased performance in the weapon identification task predicts similar performance in the first-person shooter bias task7, where participants are presented with full-scene images of White and Black men holding guns or non-gun objects and instructed to virtually “shoot” only armed targets6. Participants in this task tend to disproportionately shoot unarmed Black targets or show delayed response times when deciding not to shoot these targets, suggesting a prepotent tendency to shoot unarmed Black targets9. Various individual differences predict the magnitude of racial bias in the weapon and shooter bias tasks, including anti-Black attitudes and specific stereotype associations between Black people and danger and between Black people and weapons2,6,8,39. Some studies have demonstrated a causal role of these stereotype associations on racial bias in these tasks40. While the present study used a task designed to detect bias in response times, weapon bias tasks using more speeded response windows commonly demonstrate that participants will misidentify Black-primed tools as guns as well. Thus, the neural basis of the bias in response times observed here would be expected to generalize to bias in misidentifications (or “shoot” decisions in the shooter bias task), as commonly observed in more speeded versions of these tasks. In the real world, police officers routinely encounter stressful and ambiguous scenarios that require rapid decision-making, and samples of police officers often exhibit racial bias in these same tasks9. The disproportionate shooting of Black people by police officers is larger in U.S. cities where implicit anti-Black attitudes and the association between Black people and weapons are stronger41, factors predictive of biased responding in the weapon bias task. Thus, it is plausible that the biasing effect of racial context on ventral-visual representations of objects documented here may play a role in police officers’ real-world encounters with racial minorities, but further research is needed.

This work is not without its limitations. While we demonstrated these biases using rapid object identification under controlled conditions, future work should examine how such perceptual distortions manifest in more naturalistic viewing conditions. Studies using neural decoding approaches with higher temporal resolution, such as EEG or MEG, could better characterize the temporal dynamics of these representational biases. Moreover, while our focus on Black-White racial contexts in the US is sensible given specific stereotypes linking Black individuals to crime and weapons, it is important for future research to explore whether similar representational biases in ventral temporal cortex help mediate other socially biased effects on object identification, such as those involving a broader set of racial groups as well as gender, socioeconomic status, and other dimensions.

A natural question is how these biases can be reduced or eliminated, and the current findings may have implications for possible interventions. As various kinds of evaluative and stereotypical associations appear to drive weapon bias effects, interventions could aim to modify these underlying associations in order to shift weapon bias. While single-shot, lab-based interventions produce strong, but highly transient, reductions in implicit racial bias42, recent work using extensive, multi-week “habit-breaking” interventions has shown long-lasting effects, albeit with mixed results43. Yet effects on weapon or shooter bias have not been tested, to our knowledge. Structural and institutional interventions have also been proposed to address racial disparities in policing9,44. However, if racial bias exerts collateral effects on ventral-visual representations as perceivers navigate the world around them, the current findings also raise the possibility of visually based interventions that could target such representations.

Since experience and familiarity are known to modify ventral-visual representations of objects45, there are multiple approaches through which interventions could target such representations. The ventral-visual stream, including object-processing regions such as the LOC, is highly sensitive to statistical learning processes and the probability that specific types of objects will appear based on the preceding context46. Thus, repeated exposure to counter-stereotypical pairings (e.g., Black men with non-weapon stimuli) could prime the perceptual system and recalibrate ventral-visual representations. Interestingly, adaptation techniques relying on high-level visual aftereffects could also take the opposite approach. Adaptation involves prolonged exposure to a specific stimulus category, leading to a subsequent reduction in the associated neural population’s sensitivity to the category due to desensitization47. Accordingly, prolonged and sustained exposure to weapon stimuli in the context of Black faces could, in theory, lead such visual representations of weapons to become less pronounced, allowing other percepts (e.g., tool-like features) to dominate. It is conceivable that such counter-stereotypical priming or visual adaptation techniques, when implemented in a chronic fashion over multiple sessions, could “recalibrate” ventral-visual representations and therefore be leveraged to reduce visually based forms of social bias like the effects documented here48. However, these possibilities are only speculative and would need to be directly tested. Overall, this work points to additional perceptually driven pathways that may have the power to reduce the biasing effects of one’s preconceived notions and racial stereotypes on object identification.

Methods

This research complies with all relevant ethical regulations. The fMRI study was approved by the University Committee on Activities Involving Human Subjects at New York University, and behavioral replications were approved by Columbia University’s Institutional Review Board. Participants were recruited via advertisements in the university community and surrounding area (fMRI) or on Prolific (behavioral replications). While convenience, self-selected sampling may limit generalizability, the weapon identification bias has been robustly demonstrated across diverse populations and recruitment methods in past work, suggesting minimal impact on our core findings. All participants provided written informed consent prior to participation and received monetary compensation for participation.

Participants

We recruited a convenience sample of 32 participants from the New York City area who received monetary compensation for completing the study. Participants were recruited through posted advertisements in the New York University community and the surrounding area. Eligibility was determined through an online screening questionnaire. Participants were right-handed, had no history of neurological disease, were native English speakers, were 18-40 years of age, and had normal or corrected-to-normal vision. As the study concerned Black-related stereotypes, to avoid in-group/out-group confounds we recruited participants who did not identify as Black. No statistical methods were used to predetermine sample size. However, the target sample size (n = 30) was based on prior fMRI experiments of stereotype-driven effects on visual perception27. One participant was excluded from analysis due to excessive head movement, resulting in a total n = 31 (M age = 23.10 years, SD age = 5.21 years; 22 Asian, 1 Hispanic, 7 White, 1 Multiracial; 21 female, 10 male). No other participants were excluded from the analyses. Participant race/ethnicity and gender were self-reported. Gender was not analyzed as a variable because the fMRI study was not focused on gender bias and was not powered for gender comparisons.

This research used race-related stimuli and participant demographic data to study cognitive and neural bias. Participants’ race/ethnicity groupings in the behavioral replication studies were based on participants’ self-identification and reflect commonly used social categories. The study was not designed to make claims about any inherent traits of these groups. Face stimuli were used as visual primes to investigate the impact of stereotype activation on perception, and care was taken to statistically control for low-level visual properties of stimuli to avoid confounding effects. Results are interpreted within the context of social perception and learned social bias, and efforts were made to minimize potential harm or stigmatization of any group.

Stimuli

All stimuli used are available on the Open Science Framework (OSF) (https://osf.io/j4n5w/). For face prime stimuli, we used stimuli from prior work by Payne, Burkley, and Stokes49, which have been shown to induce weapon bias effects. These original stimuli consisted of color images of 12 Black and 12 White male faces (available at https://bkpayne.web.unc.edu/research-materials). Among these, we randomly selected 10 Black and 10 White faces, which faced directly forward and had identical size and resolution. Faces had a resting expression, were cropped to remove hair cues, and did not have any visible makeup or tattoos. Object target stimuli consisted of 10 guns and 10 tools obtained from publicly accessible websites, which were cropped onto a plain background and standardized by size and resolution. Object orientation was random. For the control target stimuli, we used leftward and rightward greyscale patterns (‘\\\’ or ‘///’) at different diagonal orientations. For the forward- and backward-masking image, a random noise pattern was used.

Object target stimuli and control target stimuli were greyscaled and equated for contrast and luminance across individual images using SHINE toolbox50, along with the masking image. The Black and White faces were kept in their original form as unnatural alterations to Black and White faces may artificially mitigate the activation of racial stereotypes. Consistent with this approach of preserving naturalistic racial cues, we did not equate faces on normed ratings of social dimensions (e.g., attractiveness, trustworthiness, dominance) as such ratings are similarly affected by racial stereotypes51 and standardization may therefore further mitigate the activation of racial stereotypes.

In addition, we matched the Black and White face stimuli on any potential featural similarity to the gun images using ResNet50, a deep convolutional neural network widely used for image recognition that has been shown to be an excellent account of the visual similarity structure of objects and faces across the ventral-visual stream30. We used custom-made Python scripts to extract vector representations for every face prime in the weapon identification task and all gun and tool stimuli from the pattern localizer task in both higher-level visual features (ResNet50 final layer, i.e., layer 50) and lower-level visual features (ResNet50 layer 10). The average visual similarity between each face prime and all gun stimuli and between each face prime and all tool stimuli were then calculated using Pearson correlation to create an index of relative visual similarity to guns (mean r_guns – mean r_tools). To evaluate whether Black and White faces differed in their relative similarity to guns, we used a permutation test to generate a null distribution by randomly shuffling the Black and White labels for all face images and recalculating the mean difference in gun similarity. This process was repeated 10,000 times to create a distribution of mean differences expected under the null hypothesis of no true difference between Black and White faces’ gun similarity. The p value was then calculated as the proportion of shuffled differences that were as extreme or more extreme than the observed difference. This test showed that Black and White faces were equally similar to the gun images (relative to the tool images).
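The feature-extraction step could be sketched as follows (a minimal sketch using torchvision; the exact internal layers corresponding to “layer 10” and “layer 50” are an implementation detail not fully specified here, so one early bottleneck block and the final pooled features stand in as assumptions):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for ResNet50 inputs
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

activations = {}
def save_to(name):
    def hook(module, inputs, output):
        activations[name] = output.flatten().detach().numpy()
    return hook

# Hypothetical layer choices standing in for "layer 10" and "layer 50"
model.layer1[2].register_forward_hook(save_to("early"))
model.avgpool.register_forward_hook(save_to("late"))

def extract_features(path):
    """Return (early, late) feature vectors for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)
    return activations["early"].copy(), activations["late"].copy()
```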

Procedure

Each participant took part in three fMRI tasks: (1) pattern localizer, (2) weapon identification, and (3) control identification. All scans entailed a rapid event-related design, and all event sequences were determined by optseq2 to allow for optimal deconvolution of hemodynamic responses52. Scans were designed to extract neural patterns representing different stimulus conditions. Because the sluggish nature of the BOLD response makes some overlap in estimated neural responses to primes and targets unavoidable, the control identification condition was included in order to isolate the effects of unique prime-target combinations rather than possible carryover from facial primes.

Pattern localizer

To obtain neural response patterns for each object category (gun or tool) in isolation, without preceding Black or White face primes (i.e., without any racial context), participants completed two runs of a localizer scan prior to the weapon and control identification tasks (Fig. 1, Top). This localizer task used the same set of gun and tool images (10 guns, 10 tools) as the weapon identification task, but presented them without any racial primes to establish “baseline” neural patterns for each object category. Localizer runs involved a 1-back task to maintain participant attention: participants were asked to press a button when they saw the same image repeated consecutively across two trials. On each trial, participants saw a fixation cross for 100 ms, a forward-masking image for 1000 ms, an object image for 200 ms, a backward-masking image for 100 ms, a blank screen for 100 ms, and finally a jittered ISI varying from 1500 to 6000 ms (interval = 1500 ms), i.e., 1 to 4 TRs. This structure kept localizer trials as comparable as possible to the target presentations in the weapon and control identification tasks, detailed next (same timing and masking images, just lacking any prime stimulus). For each participant, 80 trials were presented during a single run (40 gun and 40 tool trials, 10 stimuli per object category × 4 repetitions).

Weapon identification

On each trial, participants viewed a fixation cross for 300 ms, a forward-masking image for 1000 ms, a face prime for 200 ms (a Black or White face), and then an object target for 200 ms (gun or tool), followed by a backward-masking image for 100 ms, a blank screen for 1200 ms, and finally a jittered interstimulus interval (ISI) varying from 1500 to 6000 ms (interval = 1500 ms), i.e., 1 to 4 TRs (Fig. 1, Middle). Events were timed such that the prime and target presentations occurred in two separate TRs: the prime presentation ended at the end of one TR, and the target presentation began at the start of the following TR, so as to separate the neural response patterns for primes vs. targets. For each participant, 80 trials were presented during a single run (2 target categories × 2 race categories × 20 repetitions), with participants completing six runs. Participants were instructed to respond with either a ‘gun’ or ‘tool’ response on the button box held in their right hand, using the index vs. middle finger, as quickly and accurately as possible. The mapping between the buttons and responses was counterbalanced across participants.

Control identification

This task followed the event structure of the weapon identification task, with the only difference being that the target images in the control task were diagonal line images (Fig. 1, Bottom), rather than object images. As in the weapon identification task, participants were instructed to respond as quickly and accurately as possible on the basis of the target category: the leftward vs. rightward orientation of the diagonal stripes (‘\\\’ or ‘///’). Again, the button-response mapping was counterbalanced across participants. Participants completed two runs of the control identification task.

fMRI data acquisition parameters

Participants were scanned using a 3 T Siemens Allegra head-only scanner at the Center for Brain Imaging at New York University. Anatomical images were acquired using a 3D MPRAGE T1-weighted sequence with the following parameters: 2300 ms repetition time (TR); 2.32 ms echo time (TE); 0.9 mm³ voxel size; 230 mm field of view (FOV); 192 slices with no gap; anterior–posterior phase encoding direction. Functional images were acquired using a multiband echo-planar imaging (EPI) sequence with the following parameters: 1500 ms TR; 35 ms TE; 2 mm³ voxel size; 208 mm FOV; 68 slices with no gap; anterior–posterior phase encoding direction and multiband acceleration factor of 4. Gradient spin-echo fieldmaps were acquired in both the anterior–posterior and posterior–anterior phase encoding directions for use in correcting for potential susceptibility artifacts.

fMRI data preprocessing

To preprocess BOLD images, we used fMRIPrep 20.2.1. The T1-weighted (T1w) image was corrected for intensity non-uniformity with N4BiasFieldCorrection (ANTs 2.3.3). The T1w-reference was skull-stripped with a Nipype implementation of antsBrainExtraction (from ANTs 2.3.3). Segmentation of cerebrospinal fluid (CSF), white-matter (WM), and gray-matter (GM) was performed on the brain-extracted T1w using FAST (FSL 5.0.9). Volume-based spatial normalization to a standard MNI space was performed through nonlinear registration with antsRegistration (ANTs 2.3.3). For each functional BOLD image per participant, we performed the following via fMRIPrep. First, a reference volume and its skull-stripped version were generated. A B0-nonuniformity map was estimated based on EPI references with opposing phase-encoding directions with 3dQwarp from AFNI. Based on the estimated susceptibility distortion, a corrected EPI reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using FLIRT (FSL 5.0.9). Head-motion parameters with respect to the BOLD reference were estimated using MCFLIRT (FSL 5.0.9) before any spatiotemporal filtering. BOLD runs were slice-time corrected using 3dTshift from AFNI. The BOLD time-series were resampled onto their original native space to correct for head-motion and susceptibility distortions, and then were resampled into standard space, generating a preprocessed BOLD run in MNI space.

Voxelwise condition estimates

To test whether neural representations of objects were affected by racial cues temporally preceding the object, we compared the similarity of neural response patterns to Black- and White-primed tool and gun images (weapon identification task) with independent response patterns to tool and gun images presented in isolation (pattern localizer task). For each participant, we estimated the average hemodynamic response per voxel for each event type using a GLM via 3dDeconvolve in AFNI. The design matrix in the localizer task consisted of two conditions of interest: guns and tools. The design matrices in the weapon and control identification tasks each consisted of four conditions of interest: Black-primed guns, White-primed guns, Black-primed tools, and White-primed tools (weapon task) or Black-primed ‘\\\’ images, White-primed ‘\\\’ images, Black-primed ‘///’ images, and White-primed ‘///’ images (control task). In the separate GLMs for the three tasks, all predictors of interest were modeled as boxcar functions across the TR duration of the target stimulus presentation (1500 ms). Thus, the onset of the predictors was always the presentation of the tool or gun image, with the prime stimulus encompassed by a separate TR. The boxcar functions were convolved with a gamma variate function (GAM in AFNI). Nine confounding variables were included in each design matrix as nuisance regressors: the CSF signal, the global signal, the WM signal, and six motion parameters. For each condition, we averaged the resulting voxelwise t statistics across runs.
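For intuition, the boxcar-plus-gamma modeling of a single condition can be sketched in Python (the actual estimation used AFNI’s 3dDeconvolve; the GAM parameters below are AFNI’s defaults, stated here as an assumption):

```python
import numpy as np

TR = 1.5  # seconds

def gam_hrf(tr=TR, duration=24.0, p=8.6, q=0.547):
    """Gamma variate HRF; p and q are AFNI's default GAM parameters
    (stated as an assumption; estimation itself used 3dDeconvolve)."""
    t = np.arange(0.0, duration, tr)
    return (t / (p * q)) ** p * np.exp(p - t / q)

def condition_regressor(onsets_s, n_trs, tr=TR):
    """Boxcar spanning the one-TR (1500 ms) target presentation,
    convolved with the gamma variate HRF."""
    boxcar = np.zeros(n_trs)
    boxcar[(np.asarray(onsets_s) / tr).astype(int)] = 1.0
    return np.convolve(boxcar, gam_hrf(), mode="full")[:n_trs]
```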

Defining object-sensitive voxels

For group-level analyses on univariate data, we applied 6-mm FWHM spatial smoothing. To identify brain regions sensitive to the visual processing of objects, we conducted whole-brain univariate guns > fixation and tools > fixation contrasts using the independent gun and tool presentations from the pattern localizer task, each corrected using TFCE53 via the FSL randomise function (5000 permutations). The area of overlap between the corrected guns and tools maps (p < .05, corrected) defined the object-sensitive voxels and served as an inclusive region for voxel feature selection in subsequent MVPA (Fig. 2a).
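
A minimal sketch of forming the object-sensitive mask is given below, assuming the TFCE-corrected outputs of randomise (which stores 1 − p maps) have been saved as NIfTI files; the file names are hypothetical.

```python
# Minimal sketch: intersect two TFCE-corrected contrast maps into one mask.
# File names are hypothetical; randomise writes corrected 1-p maps, so
# corrected p < .05 corresponds to voxel values > 0.95.
import nibabel as nib
import numpy as np

guns_img = nib.load("guns_gt_fix_tfce_corrp.nii.gz")
tools_img = nib.load("tools_gt_fix_tfce_corrp.nii.gz")

overlap = (guns_img.get_fdata() > 0.95) & (tools_img.get_fdata() > 0.95)

mask_img = nib.Nifti1Image(overlap.astype(np.uint8), guns_img.affine)
nib.save(mask_img, "object_sensitive_mask.nii.gz")
```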

Whole-brain searchlight analysis

To identify neural regions whose response patterns demonstrated the predicted bias, whereby response patterns to tool images are distorted toward the gun category by Black face primes (relative to White face primes), we conducted a whole-brain searchlight analysis (radius = 20 voxels).
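
The following schematic skeleton (not the authors' code) illustrates the searchlight logic: for each voxel in a mask, the voxels within the stated radius are pooled and a pattern measure, here the relative weapon dissimilarity defined below, is computed for that sphere.

```python
# Schematic searchlight skeleton. `data` holds per-condition voxelwise estimates,
# `measure` is any function of a (n_conditions, n_sphere_voxels) pattern array,
# e.g., the relative weapon dissimilarity described in the next section.
import numpy as np

def searchlight(data, mask, radius, measure):
    """data: (n_conditions, x, y, z) estimates; mask: boolean (x, y, z)."""
    coords = np.argwhere(mask)
    out = np.full(mask.shape, np.nan)
    for cx, cy, cz in coords:
        # Voxels within `radius` (in voxel units) of the sphere center
        d = np.linalg.norm(coords - np.array([cx, cy, cz]), axis=1)
        sphere = coords[d <= radius]
        patterns = data[:, sphere[:, 0], sphere[:, 1], sphere[:, 2]]
        out[cx, cy, cz] = measure(patterns)  # statistic mapped back to the center voxel
    return out
```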

Gun vs. tool classification analyses

We tested whether multi-voxel patterns in object-sensitive cortex exhibited reliable separability of the gun vs. tool categories using a linear support vector machine (SVM) classification analysis. The analysis was performed using PyMVPA’s LinearCSVMC classifier based on LIBSVM with default parameters, taking all voxels within the object-sensitive ROI as input features. For cross-validation purposes, we evenly divided each of the two preprocessed localizer scans into three sections, resulting in six separate localizer sections. Each section was submitted to a GLM to extract gun and tool pattern estimates. We then used these gun and tool estimates from all six run sections of the pattern localizer task for classification, cross-validated with a leave-one-out scheme (i.e., the classifier was trained on five run sections and tested on the left-out run section). Mean classification accuracy was submitted at the group level to a one-sample Wilcoxon signed-rank test against chance-level accuracy (0.5).
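
The scheme can be illustrated with scikit-learn's linear SVM as a stand-in for PyMVPA's LinearCSVMC (both are LIBSVM-style linear SVMs); the data dimensions below are hypothetical.

```python
# Minimal stand-in for the classification scheme. X holds the 12 gun/tool pattern
# estimates (6 localizer sections x 2 categories); the voxel count is illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((12, 500))       # stand-in pattern estimates
y = np.tile(["gun", "tool"], 6)          # category labels
sections = np.repeat(np.arange(6), 2)    # leave one localizer section out per fold

clf = SVC(kernel="linear", C=1.0)        # default-parameter linear SVM
acc = cross_val_score(clf, X, y, groups=sections, cv=LeaveOneGroupOut())
print(acc.mean())                        # per-participant accuracy, tested against 0.5
```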

Pattern similarity analyses

We calculated a relative weapon dissimilarity measure for each condition in the weapon and control identification tasks, indexing the degree to which the condition’s response patterns approximated independent gun and tool patterns from the localizer task. Across all voxels within a given analysis, we calculated for each condition an index of dissimilarity to the independent gun pattern relative to dissimilarity to the independent tool pattern (dissimilarity-to-gun – dissimilarity-to-tool). Each dissimilarity index was quantified as a Fisher-z-transformed Pearson correlation distance (1 – r). As such, lower values on this weapon dissimilarity measure indicate greater similarity to independent gun patterns, adjusted for similarity to independent tool patterns. Because neural response patterns to guns and tools differ in an absolute sense between the initial localizer and the subsequent identification tasks, owing to various methodological differences between the tasks, this relative gun-vs.-tool dissimilarity measure helps isolate the representational bias of interest. At the group level, this relative weapon dissimilarity measure in each condition was submitted to ANOVAs and/or t-tests, depending on the analysis. We did not formally test for normality or homogeneity of variance in our data; however, our primary analyses relied on standard parametric tests and permutation-based methods, which are robust to modest violations of these assumptions. All reported tests were two-tailed unless otherwise specified.
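
A minimal sketch of this measure for a single condition is given below; the exact ordering of the Fisher-z transformation relative to the 1 – r distance is our reading of the text and is flagged as an assumption in the code.

```python
# Minimal sketch of the relative weapon dissimilarity measure for one condition.
# Assumption: r is Fisher-z transformed before forming the distance; the text
# could also be read as z-transforming afterward.
import numpy as np

def relative_weapon_dissimilarity(cond, gun, tool):
    """cond, gun, tool: (n_voxels,) response patterns (condition vs. localizer)."""
    def dissim(a, b):
        r = np.corrcoef(a, b)[0, 1]
        return 1.0 - np.arctanh(r)  # Fisher-z-transformed correlation distance
    # Lower values = closer to the independent gun pattern, adjusted for tool similarity
    return dissim(cond, gun) - dissim(cond, tool)
```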

Response time analyses

To test for racial biases in participants’ object identification, we analyzed participants’ response times with a repeated-measures ANOVA. Prior to analysis, response times were log-transformed due to a strongly positively skewed distribution2,54. To quantify each participant’s racial bias in responses, we calculated a standardized response-time difference score: [Black-primed tool – White-primed tool] – [Black-primed gun – White-primed gun]. Higher values on this score therefore indicate delays in categorizing Black-primed tools relative to White-primed tools, adjusted for the analogous difference on gun trials.
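
For concreteness, the difference score reduces to the following arithmetic on per-condition mean log response times (the values below are illustrative, not from the data).

```python
# Minimal sketch of the response-time bias score for one participant.
# The per-condition mean log RTs are hypothetical stand-ins.
rt = {"black_tool": 6.32, "white_tool": 6.25,
      "black_gun": 6.10, "white_gun": 6.11}

bias = (rt["black_tool"] - rt["white_tool"]) - (rt["black_gun"] - rt["white_gun"])
# Higher values: slower to categorize Black-primed tools, adjusted for gun trials
```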

To test for the relationship between this index of participants’ response-time delays in categorizing Black-primed tools and the neural representational biases demonstrated in pattern similarity (Black-primed tool patterns approximating “gun”), we calculated a difference score per participant using the relative weapon dissimilarity measure, such that positive values indicate greater similarity to “gun” and negative values indicate greater similarity to “tool”. A higher value on this difference score thus indicates greater neural-pattern similarity to “gun” for Black-primed tools relative to White-primed tools. Spearman rank correlations were used so as not to assume a linear relationship between the behavioral and neural data55. For correlations between the behavioral and neural indices, we used a one-tailed test, consistent with our directional hypothesis.
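
A minimal sketch of the resulting brain–behavior test, with illustrative arrays standing in for the per-participant scores (the one-tailed alternative argument requires SciPy ≥ 1.7):

```python
# Minimal sketch: one-tailed Spearman correlation between the behavioral bias
# score and the neural pattern-shift score. Arrays are synthetic stand-ins.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
behav = rng.standard_normal(24)                     # RT bias scores per participant
neural = 0.4 * behav + rng.standard_normal(24)      # neural difference scores

rho, p = spearmanr(behav, neural, alternative="greater")  # directional hypothesis
```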

Analyses controlling for visual similarity

A new GLM design matrix was constructed to estimate response patterns to the individual gun and tool stimuli during the weapon identification task. This design matrix included predictors modeling each of the 20 individual stimuli. The identical weapon dissimilarity measure (to independent gun and tool patterns) was computed from these individual trial estimates, which were submitted to a linear mixed-effects model that could account for visual similarity on a sensitive, trial-by-trial basis (estimating the intrinsic physical similarity between each trial’s face prime and the independent gun and tool stimuli of the pattern localizer task). In the ResNet50 model described earlier (see Stimuli), the final layer (layer 50) represents higher-level visual features and best corresponds to the ventral-temporal regions we observed30. For completeness, we also obtained visual similarity via an early layer (layer 10), which represents lower-level visual features. To model visual similarity, we used the index of relative visual similarity to independent guns described earlier (mean r to guns – mean r to tools) for all face prime images, both in higher-level visual features (ResNet50 final layer) and lower-level visual features (ResNet50 early layer). These trial-by-trial visual similarity estimates were used as covariates in linear mixed-effects models to test whether a Black face prime’s effect of shifting tool representations toward the gun category held above and beyond any intrinsic visual similarity between the specific face prime and the independent gun and tool stimuli.
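
A sketch of how such layer-wise visual similarity can be computed with a pretrained ResNet-50 is given below; the torchvision node names "layer1" and "layer4" are stand-ins for the paper's early (layer 10) and final (layer 50) layers, and the image paths are hypothetical.

```python
# Sketch: relative visual similarity of a face prime to gun vs. tool images,
# computed on flattened ResNet-50 activations. Node names and paths are
# illustrative stand-ins, not the authors' exact extraction points.
import torch
from torchvision import models, transforms
from torchvision.models.feature_extraction import create_feature_extractor
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
extractor = create_feature_extractor(model, return_nodes={"layer1": "early",
                                                          "layer4": "late"})
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path):
    with torch.no_grad():
        out = extractor(prep(Image.open(path).convert("RGB")).unsqueeze(0))
    return {k: v.flatten() for k, v in out.items()}

def rel_visual_similarity(face_path, gun_paths, tool_paths, layer="late"):
    """mean r(face, gun_i) - mean r(face, tool_j) at one layer."""
    f = features(face_path)[layer]
    def r(a, b):
        return torch.corrcoef(torch.stack([a, b]))[0, 1]
    gun_r = torch.stack([r(f, features(p)[layer]) for p in gun_paths]).mean()
    tool_r = torch.stack([r(f, features(p)[layer]) for p in tool_paths]).mean()
    return (gun_r - tool_r).item()
```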

To examine “baseline” similarities in neural response patterns between object stimuli (in the pattern localizer task) and face stimuli (in the control identification task) when race was not relevant to the task, we computed a 20 object images (localizer) × 20 face images (control identification task) correlation matrix capturing neural-pattern similarities between specific face and object image pairs outside any racial context. We conducted an F-test with permutation-based significance testing to evaluate whether the observed differences in mean Black–gun, Black–tool, White–gun, and White–tool correlations were larger than would be expected by chance under the null hypothesis of no differences by condition pairing. To generate a null distribution of F statistics, we performed 10,000 permutations in which rows (face images) and columns (object images) were independently shuffled while preserving the matrix structure, and the F statistic was recomputed on each permutation. This permutation-based F-test revealed no significant differences in mean neural-pattern correlations across the four condition pairings in either object-sensitive cortex or the right ventral temporal cortex region, confirming that at “baseline,” outside the weapon identification task, the Black–gun, White–gun, Black–tool, and White–tool correlations in neural response patterns to individual stimuli did not statistically differ.
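
A minimal sketch of this permutation scheme, with a random stand-in matrix and illustrative condition labels (10 Black and 10 White faces crossed with 10 guns and 10 tools):

```python
# Minimal sketch of the permutation-based F-test over the 20 x 20 face-by-object
# correlation matrix. M and the label layout are synthetic stand-ins.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))    # stand-in for neural-pattern correlations
face_black = np.arange(20) < 10      # rows: first 10 faces Black, last 10 White
obj_gun = np.arange(20) < 10         # columns: first 10 objects guns, last 10 tools

def fstat(mat):
    cells = [mat[np.ix_(face_black == fb, obj_gun == og)].ravel()
             for fb in (True, False) for og in (True, False)]
    return f_oneway(*cells).statistic  # one-way F over the four condition pairings

obs = fstat(M)
null = np.empty(10_000)
for i in range(10_000):
    perm = M[rng.permutation(20)][:, rng.permutation(20)]  # shuffle rows and columns
    null[i] = fstat(perm)
p = (null >= obs).mean()
```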

To additionally ensure that the gun stimuli’s neural response patterns were not more homogeneous than those of the tool stimuli, we used data from the pattern localizer task, obtained without any Black or White face primes (i.e., outside any racial context), to generate 10 × 10 gun and 10 × 10 tool correlation matrices capturing all pairwise within-category gun similarities and within-category tool similarities. To test whether the gun images’ pairwise within-category correlations differed from those of the tool images, we used a permutation test, randomly shuffling the gun and tool labels in the correlation matrices across 10,000 iterations to generate a null distribution of differences in gun vs. tool within-category similarity, separately for object-sensitive cortex and the right ventral-temporal cortex region. Within-category variation for the two object categories was statistically indistinguishable in both regions.
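
A minimal sketch of this label-shuffling test, with synthetic stand-in patterns:

```python
# Minimal sketch of the within-category homogeneity test: mean within-gun vs.
# within-tool pairwise correlations, against a null built by shuffling the
# gun/tool labels across the 20 localizer stimuli. Data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.standard_normal((20, 500))   # 10 guns + 10 tools x voxels
labels = np.array([1] * 10 + [0] * 10)      # 1 = gun, 0 = tool
C = np.corrcoef(patterns)                   # 20 x 20 stimulus correlation matrix
iu = np.triu_indices(10, k=1)               # unique within-category pairs

def within_diff(lab):
    g = C[np.ix_(lab == 1, lab == 1)]
    t = C[np.ix_(lab == 0, lab == 0)]
    return g[iu].mean() - t[iu].mean()

obs = within_diff(labels)
null = np.array([within_diff(rng.permutation(labels)) for _ in range(10_000)])
p = (np.abs(null) >= abs(obs)).mean()       # two-tailed permutation p
```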

Behavioral replication studies

To demonstrate robustness and generalizability, we conducted two behavioral replication studies with larger, more diverse samples that included Black participants. Both studies used the same timing and experimental parameters as the fMRI weapon identification task, except with a standard 1000 ms inter-trial interval (ITI) in place of the jittered ITI used for fMRI.

For stimuli, we used 30 Black and 30 White male faces from the Florida Department of Corrections, originally used in prior weapon bias research. These faces offer enhanced ecological validity as they represent individuals law enforcement may encounter. Object stimuli consisted of an expanded set of 12 guns and 12 tools (available at https://osf.io/j4n5w/).

Replication Study 1 recruited 200 participants through Prolific (M age = 37.97 years, SD age = 11.96 years; 123 White, 37 Black, 19 Asian, 3 Native American, 10 Multiracial, 8 Another Race; 121 female, 75 male, 4 other). Replication Study 2 recruited 222 participants through Prolific (M age = 39.79 years, SD age = 12.94 years; 143 White, 38 Black, 21 Asian, 4 Native American, 1 Pacific Islander, 8 Multiracial, 7 Another Race; 133 female, 85 male, 3 other, 1 declined to report) and was preregistered at https://osf.io/4wsvn. The preregistration included a power analysis for detecting potential moderation by participant race. There were no deviations from the preregistration.

All procedures were identical across both replications. Participants completed the weapon identification task online, categorizing guns and tools following Black or White face primes. Full methodological details are provided in Supplementary Information.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Supplementary information

Supplementary Information (499.9KB, pdf)
Reporting Summary (4.6MB, pdf)

Acknowledgements

We thank Roshni Lulla, Gavin Mullin, Keith Sanzenbach, Benjamin Stillerman, David Amodio, Sammy Tavassoli, Pablo Velasco, and Gabriel Fajardo for their help with stimulus preparation, data collection, and data processing. This work was in part funded by the National Science Foundation research grant BCS-1654731 (J.B.F.).

Author contributions

D.O. and J.B.F. conceptualized the study. D.O. and J.B.F. designed the experiments. D.O. and H.I.V. collected data. D.O. analyzed the data with input from J.B.F. D.O. drafted the manuscript, and J.B.F. provided critical revisions. All authors discussed the results and contributed to the final manuscript.

Peer review information

Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Data availability

Data and materials are publicly available via the OSF: https://osf.io/j4n5w (10.17605/osf.io/j4n5w). Due to ethical and privacy considerations, raw data are not publicly posted. Interested researchers may contact the corresponding author to discuss access under appropriate data-sharing agreements.

Code availability

Analysis code is publicly available via the OSF: https://osf.io/j4n5w (10.17605/osf.io/j4n5w).

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

The online version contains supplementary material available at 10.1038/s41467-025-63381-7.

References

1. Swaine, J., Laughland, O. & Lartey, J. Black Americans killed by police twice as likely to be unarmed as white people. The Guardian (2015).
2. Payne, B. K. Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. J. Pers. Soc. Psychol. 81, 181–192 (2001).
3. Rivers, A. M. The Weapons Identification Task: Recommendations for adequately powered research. PLoS One 12, e0177857 (2017).
4. Kleiman, T., Hassin, R. R. & Trope, Y. The control-freak mind: Stereotypical biases are eliminated following conflict-activated cognitive control. J. Exp. Psychol. Gen. 143, 498 (2014).
5. Amodio, D. M. et al. Neural signals for the detection of unintentional race bias. Psychol. Sci. 15, 88–93 (2004).
6. Correll, J., Park, B., Judd, C. M. & Wittenbrink, B. The police officer’s dilemma: Using ethnicity to disambiguate potentially threatening individuals. J. Pers. Soc. Psychol. 83, 1314–1329 (2002).
7. Ito, T. A. et al. Toward a comprehensive understanding of executive cognitive function in implicit racial bias. J. Pers. Soc. Psychol. 108, 187–218 (2015).
8. Payne, B. K. Conceptualizing control in social cognition: How executive functioning modulates the expression of automatic stereotyping. J. Pers. Soc. Psychol. 89, 488–503 (2005).
9. Payne, B. K. & Correll, J. The weapon identification task: Issues and recommendations. In Adv. Exp. Soc. Psychol. Vol. 62, 1–50 (Elsevier, 2020).
10. Payne, B. K., Shimizu, Y. & Jacoby, L. L. Mental control and visual illusions: Toward explaining race-biased weapon misidentifications. J. Exp. Soc. Psychol. 41, 36–47 (2005).
11. Freeman, J. B., Stolier, R. M. & Brooks, J. A. The neural representational geometry of social perception. In Adv. Exp. Soc. Psychol. Vol. 61, 237–287 (Academic Press, 2020).
12. Freeman, J. B., Stolier, R. M., Brooks, J. A. & Stillerman, B. S. The neural representational geometry of social perception. Curr. Opin. Psychol. 24, 83–91 (2018).
13. Stolier, R. M. & Freeman, J. B. A neural mechanism of social categorization. J. Neurosci. 37, 5711–5721 (2017).
14. Gilbert, C. D. & Li, W. Top-down influences on visual processing. Nat. Rev. Neurosci. 14, 350–363 (2013).
15. Summerfield, C. & de Lange, F. P. Expectation in perceptual decision making: Neural and computational mechanisms. Nat. Rev. Neurosci. 15, 745–756 (2014).
16. Diaz, J. A., Queirazza, F. & Philiastides, M. G. Perceptual learning alters post-sensory processing in human decision-making. Nat. Hum. Behav. 1, 0035 (2017).
17. Egner, T., Monti, J. M. & Summerfield, C. Expectation and surprise determine neural population responses in the ventral visual stream. J. Neurosci. 30, 16601–16608 (2010).
18. Folstein, J. R., Palmeri, T. J. & Gauthier, I. Category learning increases discriminability of relevant object dimensions in visual cortex. Cereb. Cortex 23, 814–823 (2013).
19. Kuai, S.-G., Levi, D. & Kourtzi, Z. Learning optimizes decision templates in the human visual cortex. Curr. Biol. 23, 1799–1804 (2013).
20. Freeman, J. B. & Ambady, N. A dynamic interactive theory of person construal. Psychol. Rev. 118, 247–279 (2011).
21. Freeman, J. B. & Johnson, K. L. More than meets the eye: Split-second social perception. Trends Cogn. Sci. 20, 362–374 (2016).
22. Bar, M. et al. Cortical mechanisms specific to explicit visual object recognition. Neuron 29, 529–535 (2001).
23. Haxby, J. V. et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430 (2001).
24. Gauthier, I. et al. The fusiform “face area” is part of a network that processes faces at the individual level. J. Cogn. Neurosci. 12, 495–504 (2000).
25. Chao, L. L., Haxby, J. V. & Martin, A. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nat. Neurosci. 2, 913–919 (1999).
26. Creem-Regehr, S. H. & Lee, J. N. Neural representations of graspable objects: Are tools special? Cogn. Brain Res. 22, 457–469 (2005).
27. Stolier, R. M. & Freeman, J. B. Neural pattern similarity reveals the inherent intersection of social categories. Nat. Neurosci. 19, 795–797 (2016).
28. Brooks, J. A., Chikazoe, J., Sadato, N. & Freeman, J. B. The neural representation of facial-emotion categories reflects conceptual structure. Proc. Natl Acad. Sci. USA 116, 15861–15870 (2019).
29. Barnett, B. O., Brooks, J. A. & Freeman, J. B. Stereotypes bias face perception via orbitofrontal–fusiform cortical interaction. Soc. Cogn. Affect. Neurosci. 16, 302–314 (2021).
30. Wen, H., Shi, J., Chen, W. & Liu, Z. Deep residual network predicts cortical representation and organization of visual features for rapid categorization. Sci. Rep. 8, 3752 (2018).
31. Thompson-Schill, S. L., D’Esposito, M., Aguirre, G. K. & Farah, M. J. Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proc. Natl Acad. Sci. USA 94, 14792–14797 (1997).
32. Bar, M. & Aminoff, E. Cortical analysis of visual context. Neuron 38, 347–358 (2003).
33. Livne, T. & Bar, M. Cortical integration of contextual information across objects. J. Cogn. Neurosci. 28, 948–958 (2016).
34. Binder, J. R., Desai, R. H., Graves, W. W. & Conant, L. L. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796 (2009).
35. Lupyan, G., Thompson-Schill, S. L. & Swingley, D. Conceptual penetration of visual processing. Psychol. Sci. 21, 682–691 (2010).
36. Greene, M. R. & Fei-Fei, L. Visual categorization is automatic and obligatory: Evidence from Stroop-like paradigm. J. Vis. 14, 14 (2014).
37. Çukur, T., Nishimoto, S., Huth, A. G. & Gallant, J. L. Attention during natural vision warps semantic representation across the human brain. Nat. Neurosci. 16, 763–770 (2013).
38. Brady, T. F. & Oliva, A. Statistical learning using real-world scenes: Extracting categorical regularities without conscious intent. Psychol. Sci. 19, 678–685 (2008).
39. Glaser, J. & Knowles, E. D. Implicit motivation to control prejudice. J. Exp. Soc. Psychol. 44, 164–172 (2008).
40. Correll, J., Park, B., Judd, C. M. & Wittenbrink, B. The influence of stereotypes on decisions to shoot. Eur. J. Soc. Psychol. 37, 1102–1117 (2007).
41. Hehman, E., Flake, J. K. & Calanchini, J. Disproportionate use of lethal force in policing is associated with regional racial biases of residents. Soc. Psychol. Personal. Sci. 9, 393–401 (2018).
42. Lai, C. K. et al. Reducing implicit racial preferences: II. Intervention effectiveness across time. J. Exp. Psychol. Gen. 145, 1001–1016 (2016).
43. Forscher, P. S., Mitamura, C., Dix, E. L., Cox, W. T. & Devine, P. G. Breaking the prejudice habit: Mechanisms, timecourse, and longevity. J. Exp. Soc. Psychol. 72, 133–146 (2017).
44. Spencer, K. B., Charbonneau, A. K. & Glaser, J. Implicit bias and policing. Soc. Personal. Psychol. Compass 10, 50–63 (2016).
45. de Beeck, H. P. & Baker, C. I. The neural basis of visual object learning. Trends Cogn. Sci. 14, 22–30 (2010).
46. Richter, D., Ekman, M. & de Lange, F. P. Suppressed sensory response to predictable object stimuli throughout the ventral visual stream. J. Neurosci. 38, 7452–7461 (2018).
47. Webster, M. A. Visual adaptation. Annu. Rev. Vis. Sci. 1, 547–567 (2015).
48. Shropshire, J. L. & Johnson, K. L. Harnessing visible representation to mitigate bias. Policy Insights Behav. Brain Sci. 8, 27–33 (2021).
49. Payne, B. K., Burkley, M. A. & Stokes, M. B. Why do implicit and explicit attitude tests diverge? The role of structural fit. J. Pers. Soc. Psychol. 94, 16–31 (2008).
50. Willenbockel, V. et al. Controlling low-level image properties: The SHINE toolbox. Behav. Res. Methods 42, 671–684 (2010).
51. Xie, S. Y., Flake, J. K., Stolier, R. M., Freeman, J. B. & Hehman, E. Facial impressions are predicted by the structure of group stereotypes. Psychol. Sci. 32, 1979–1993 (2021).
52. Dale, A. M. Optimal experimental design for event-related fMRI. Hum. Brain Mapp. 8, 109–114 (1999).
53. Smith, S. M. & Nichols, T. E. Threshold-free cluster enhancement: Addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage 44, 83–98 (2009).
54. Amodio, D. M., Devine, P. G. & Harmon-Jones, E. Individual differences in the regulation of intergroup bias: The role of conflict monitoring and neural signals for control. J. Pers. Soc. Psychol. 94, 60–74 (2008).
55. Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis: Connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).
56. DeBruine, L. & Jones, B. Face Research Lab London Set. Figshare Dataset 10.6084/m9.figshare.5047666.v5 (2017).
57. Morey, R. D. Confidence intervals from normalized data: A correction to Cousineau (2005). Tutor. Quant. Methods Psychol. 4, 61–64 (2008).
