2022 Nov 11;102(1):23–44. doi: 10.1093/sf/soac115

Identification in Interaction: Racial Mirroring between Interviewers and Respondents

Robert E M Pickett 1, Aliya Saperstein 2, Andrew M Penner 3
PMCID: PMC10347422  PMID: 37456911

Abstract

Previous research has established that people shift their identities situationally and may come to subconsciously mirror one another. We explore this phenomenon among survey interviewers in the 2004–2018 General Social Survey by drawing on repeated measures of racial identification collected after each interview. We find not only that interviewers self-identify differently over time but also that their response changes cannot be fully explained by several measurement-error-related expectations, either random or systematic. Rather, interviewers are significantly more likely to identify their race in ways that align with respondents’ reports. The potential for affiliative identification, even if subconscious, has a range of implications for understanding race-of-interviewer effects, the social construction of homophily, and for how we consider causality in studies of race and racial inequality more broadly.

Keywords: social interaction, conversation analysis, social action


Research demonstrates that acts of identification and how people present themselves to others are profoundly influenced by social interaction and context (Antaki and Widdicombe 1998; Ashmore, Deaux, and McLaughlin-Volpe 2005; Richeson and Somers 2016).1 Racial identification, in particular, is influenced by myriad factors and is more malleable than commonly believed (Bratter and O’Connell 2017; Liebler et al. 2017). It is also well established that, when interacting, people tend to mirror each other’s behavior, replicating gazes, posture, vocal tone, yawns, and more (see Ambady and Weisbuch 2010 for a review). These three lines of inquiry span subfields in sociology and psychology and are typically treated as distinct. Their combined insights, however, lead us to ask whether people engaged in interaction might also mirror racial identifications.

In particular, we explore how survey interviewers record their racial identification when asked to categorize themselves following each interview they conduct. Our data come from the General Social Survey (GSS), which includes repeated racial self-identifications of survey interviewers from 2004 to 2018 (Smith et al. 2018). Whereas previous research compared racial identifications collected over intervals of several months, years, or decades (e.g., Harris and Sim 2002; Liebler et al. 2017; Saperstein and Penner 2012), our data include reports over months, weeks, and even within the same day. These data allow us to explore whether an interviewer’s racial identification varies across interviews in ways that are patterned by interactional or contextual factors.

Our analysis reveals that interviewers are more likely to align their racial self-identification with the respondent’s identification than would be expected due to chance. Stable interviewer racial identification is the norm in our data, as would be expected given the long history and institutionalization of racial categorization in the United States, but patterns of fluidity are not fully explained by either random measurement error or several plausible hypotheses related to systematic error. Rather, observed changes are more consistent with our hypothesized process of racial mirroring. The potential for mirroring in identification, whether conscious or not, has implications for how we conceptualize race and racial identification, how those concepts are applied in survey research and design, and how researchers explain racial inequality—including processes of homophily—more generally.

Racial Identification in Time and Context

Although social scientists typically understand race as socially constructed and not biologically or genetically determined (e.g., American Sociological Association 2003), most social science research continues to treat race as a fixed individual characteristic, leaving changes in racial identification relatively understudied. Over the past two decades, with increasing access to longitudinal data, a body of work has emerged describing the levels, correlates, and consequences of racial fluidity, in the United States and beyond.

Typically, researchers in this area highlight changes in racial identification that occur over time or as context changes (Roth 2016). Over time, individuals can come to understand and/or present themselves differently, changing how they self-identify and how others classify them (Liebler et al. 2017; Saperstein and Penner 2012). This may occur, in part, because cultural conceptions of race change (Loveman 2014; Loveman and Muniz 2007), people’s self-conceptions change (Jiménez 2010), and/or the available response options change (Chang 1999; Loveman, Muniz, and Bailey 2012). Beyond changes over time, shifting contexts can also change how individuals see themselves or others (Roth 2012). For example, adolescents may identify differently if asked about their race at school or at home (Harris and Sim 2002), individuals may align their self-presentation with their goals and expectations (Renfrow 2004), and the same people can be perceived differently depending on their surroundings (Freeman et al. 2015).

It is also useful to explore how racial identification shifts across interactions. We distinguish between racial categorization in “interactions” and across “contexts”: the former refers to instances of racial categorization in particular moments while engaging with particular individuals; the latter refers to changes in categorization across the different social locations that structure those interactions. Most prior research has addressed how changes in racial identification relate to changes in context. A change in interaction, on the other hand, refers to a new interpersonal encounter in which an individual could be called upon to identify or perform their race. Although our focus is on micro-level interaction, it is important to acknowledge that these interactions occur within broader dynamics of structural racism (Bailey et al. 2017; Rucker and Richeson 2021) and shifting racial boundaries and hierarchies (Omi and Winant 1994; Wimmer 2012). Here we seek to contribute to existing meso- and macro-level work on racial formation, negotiation, and boundary crossing (see Saperstein, Penner, and Light 2013 for a review) by highlighting parallel processes occurring at the interactional level.

Racial Identification in Interaction

To understand how someone’s racial identification might change from one interaction to the next, we draw on the broad traditions of social interactionism (Blumer 1969), social psychological work on collective identity (Ashmore, Deaux, and McLaughlin-Volpe 2005), theories of situational ethnicity (Okamura 1981), and schematic cognition (Brubaker, Loveman, and Stamatov 2004). Each of these perspectives emphasizes how individuals negotiate shared meanings. As Ridgeway (2009:147) notes, “We need a shared way of categorizing and defining ‘who’ self and other are in the situation so that we can anticipate how each of us is likely to act and coordinate our actions accordingly.” These perspectives see racial and ethnic categories as rooted in cultural schemas (Roth 2012), suggesting a process of simplification whereby individuals recognize an instance of a given category and use their schematic knowledge to fill in relevant details to guide future perception (DiMaggio 1997). Racial schemas are the negotiated meanings that individuals can deploy (or not) and ascribe to others (or resist ascription) depending on their salience in a given interaction (Day 1998).

Our case, a survey interview, is an instance where individuals are likely to interactionally negotiate their identities. For example, if the interviewer is surprised by the respondent’s racial self-identification, their frame of reference for which schema is at play might shift (Sewell 1992). Alternatively, deploying the same racial schema may have different effects in different interactions (Sewell 1992: 18). Through subtle and perhaps subconscious negotiation, interviewers and respondents may align their schematic understandings of race, including who falls into or out of a given category, what those categories represent, and the acceptable criteria for membership (cf. Morning 2018).

Individuals, however, do not reinvent the concept of “race” in each interaction (Okamura 1981). The negotiation of shared meaning occurs against a backdrop of extensive cultural repertoires that simplify the identification process (Stets 2006), including the routinization of declaring one’s race on various forms. We might expect, then, that the schemas interviewers and respondents use to identify themselves and others remain relatively stable but shift at the margins to accommodate unique aspects of an interaction.

Previous research also finds that racial identification fluidity is not equally likely across individuals. Studies in the contemporary United States find that changes in racial identification are more common among people who ever identify as Hispanic (e.g., Brown, Hitlin, and Elder 2006), American Indian or Alaskan Native (e.g., del Pinal and Schmidley 2005), Pacific Islander (e.g., Liebler et al. 2017), or multiracial (e.g., Doyle and Kao 2007), and less common among people who initially identify as monoracial White or Black. Inconsistency between how one identifies and how one is perceived is also unequally distributed across groups. Self-identified American Indians and people with multiracial ancestry are the most likely to experience this type of “racial contestation,” while self-identified monoracial White and Black Americans are the least likely, with Asian and Hispanic Americans falling in between (Vargas and Kingsbury 2016). To date, such research has focused on the frequency and correlates of racial identification among survey respondents. Our data include information about both interviewers and respondents, making them uniquely suited to explore interactional aspects of racial identification.

Survey Interviews as Affiliative Interactions

When individuals attempt to build rapport in an affiliative interaction, they can start to exhibit mirroring behaviors. This mirroring can take several forms, from the explicit, such as highlighting shared tastes and preferences (“I like that band too!”), to the implicit, as with the tendency to mirror non-verbal behavior such as accents, gaze, and yawns (Ambady and Weisbuch 2010). These processes appear to be fundamental to human behavior, as research suggests that observing others can cause sympathetic brain activity. For example, seeing someone express pain may cause pain-like brain activation in ourselves (Botvinick et al. 2005).

Thus, we expect that interviewers who intend to build rapport may come to mirror respondents, whether they are aware of it or not. Interviewer trainings often stress building rapport with respondents to increase response rates and the respondent’s willingness to answer sensitive, personal questions (Sun, Conrad, and Kreuter 2021). Previous research using recorded interviews and conversation analysis finds that interviewers employ a variety of observable behaviors to keep respondents engaged, from acknowledging responses and assisting with difficult questions to initiating or reciprocating laughter (e.g., Garbarski, Schaeffer, and Dykema 2016). Such studies demonstrate that survey interviews are “collaborative achievements” (see also Maynard et al. 2002) but have yet to explore how the interactional context could influence racial identification.

Assuming interviewers are prone to subtle mirroring behaviors in their attempts to build rapport, we expect to observe mirroring in racial identification for two reasons. First, race will be made salient throughout the interaction whenever the topic arises on the questionnaire.2 Second, in the GSS, both interviewers and respondents are asked to identify their race and Hispanic origin.3 The interviewer does so in private, however, after the interview, so their response is likely subconscious rather than an intentional signal of affiliation. We explore the dyadic patterns of race reporting in our data to examine whether interviewers mirror respondents’ racial identifications. When we observe changes in interviewers’ racial identifications over time, we consider whether this yields more (or less) alignment between respondent and interviewer identifications than would be expected by chance.

Data and Methods

The GSS is a longstanding nationally representative sample of US adults living in households, fielded by NORC. These data are uniquely suited for our purposes because, beginning in 2004, GSS interviewers self-identify their race and Hispanic origin following each interview. Interviewers record this information along with other aspects of the interview context (e.g., if the respondent’s spouse was present) in the interviewer remarks section, which must be completed before beginning the next interview. Using pooled cross-sections from 2004 to 2018, we track changes in interviewers’ racial identifications as they engage in interactions with different respondents.4 We focus on race and Hispanic origin because they are the only interviewer characteristics collected repeatedly after each interview, not because we expect mirroring behavior to be specific to racial identification.

We use unique interviewer identification numbers to explore variation in identification within interviewers. However, we are limited to looking at changes within interviewer-year units as we cannot track interviewers across survey years.5 For example, if the same interviewer worked in 2004 and 2006, she was assigned different identification numbers each year, so we explore how she identified herself within 2004 and within 2006 separately. Within a given year, GSS interviewers conducted an average of 17 interviews, ranging from as few as one to as many as 120 interviews, providing 20,845 reports from 1,262 interviewer-years.6

Interviewer Racial Identification

After completing the respondent’s portion of the questionnaire and recording information about the interview context, interviewers are asked their Hispanic origin (“Interviewer, are you Spanish, Hispanic, or Latino/Latina?”) and racial identification (“What is your race? Indicate one or more races that you consider yourself to be,” with the response options: White, Black or African-American, American Indian or Alaska Native, Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese, Other Asian, Native Hawaiian, Guamanian or Chamorro, Samoan, Other Pacific Islander, and Some Other Race). This two-question format is similar to the one used in the US Census. Except for the initial prompt that makes it clear the interviewer should answer about themselves, the wordings are identical to the questions asked of respondents and, as with the respondents, interviewers could record up to three race responses.7

Although Hispanic origins are intended to be recorded separately in this format, some respondents and interviewers selected Some Other Race and specified Hispanic (5% and 1%, respectively). Among people who identified as having either a Hispanic origin or race, it was most common to report a Hispanic origin without also reporting a Hispanic race (63% of respondents’ and 84% of interviewers’ responses), and least common to identify one’s race as Hispanic but not report a Hispanic origin (0.06% of respondents’ and 0.05% of interviewers’ responses). Given these reporting patterns, we include “Hispanic” among the racial categories we investigate. Nevertheless, because of the two-question format, interviewers can have a fluid Hispanic origin identification, a fluid racial identification, both, or neither.8

Timelines of race and origin responses for three illustrative interviewer-years can be found in Figure 1. The examples are intended to show the range of variation in the data, including interviewer-years that had both stable and fluid Hispanic origins and race responses. Figure 1 also shows responses that switched from a single race to multiple races, or between two single race responses. Our measures of racial identification treat any given combination of categories as distinct (e.g., monoracial White is distinct from White and Black), but our conclusions are not dependent on this coding decision.9

Figure 1. Sample interviewer-year timelines.

Source: General Social Survey. Note: Survey years typically begin in early March.

The first example in Figure 1, interviewer number 137, conducted five interviews in 2006. The interviewer identified as Black with non-Hispanic origins three times (March 11th, 19th, and 25th) and twice identified as both Black and American Indian with non-Hispanic origins (March 22nd and April 24th). We consider Hispanic origin to be “stable” for this interviewer-year because the interviewer always reported the same origin, and we consider their racial identification to be “fluid” because they reported different race categories at different times. Our second example, interviewer 18 in 2006, also had a stable non-Hispanic origin and was racially fluid, identifying as White on March 18th and 23rd, and as Black for the remaining fourteen interviews. Lastly, interviewer 138 in 2006 exemplifies stable racial identification and fluid Hispanic origin identification. For most of the year they identified as Hispanic, except for June 10th and 20th when they identified as non-Hispanic. The variation exemplified by these three cases provides the foundation for our analyses below.

In addition to these repeated post-interview identifications, each interviewer-year has a static race classification based on NORC personnel files (hereafter, “HR files”). The HR file information is self-reported on a form after the interviewer was hired.10 This record combines race and origin and provides a different set of categories from those available after each survey interview. (The HR categories are limited to: White, Black, Hispanic, Asian, two or more races, and no answer.) Thus, the HR file responses do not necessarily match the post-interview responses, even for interviewers with stable race and origin identifications.

Descriptive statistics for the 1,262 interviewer-years are presented separately for the sample of interviewer-years with at least one change in racial identification (“Fluid Self-id” in Table 1) and for the sample of interviewer-years with no such change (“Stable Self-id”). As expected, based on previous research (e.g., Liebler et al. 2017), interviewers recorded as White in their HR file are least likely to have fluid racial identification in a given year (14%), followed by interviewers recorded as “Two or More Races” (28%), Hispanic (31%), Black (33%), and Asian (50%). Importantly, though, fluid responses are not limited to a subset of interviewers, such as interviewers who are recorded as Hispanic or multiracial in their HR file.11 Further, men and women are approximately equally likely to have a fluid racial identification, and interviewer-years with stable and fluid racial identification tend to be of comparable age, NORC tenure, and GSS interview caseload.12 This suggests fluid racial identification is a phenomenon experienced by a wide range of interviewers.

Table 1.

Interviewer-Year Characteristics by Stable or Fluid Racial Identification

Stable racial-id Fluid racial-id Total
HR racial identification (static)
 White 86% (710) 14% (114) 824
 Black 67% (147) 33% (73) 220
 Hispanic 69% (86) 31% (39) 125
 Asian 50% (9) 50% (9) 18
 Two or more races 72% (34) 28% (13) 47
Gender
 Women 80% (796) 20% (203) 999
 Men 81% (195) 19% (47) 242
Mean age (years) 53 53
Mean NORC tenure (years)a 3.30 3.35
Interview caseload
 At or below average (17 or fewer) 77% (629) 23% (183) 812
 Above average (more than 17) 77% (346) 23% (104) 450
Post-survey Hispanic origin
 Stable 83% (975) 17% (198) 1,173
 Fluid 35% (31) 65% (58) 89
Total 80% (1006) 20% (256) 1,262

aA two-tailed t-test for the difference in mean NORC tenure between stable and fluid interviewers is non-significant: t = −0.1615, p = 0.8718.

Finally, we cross-tabulate stable or fluid post-survey racial identification and stable or fluid Hispanic origin identification (Table 1). Though there is some overlap, there are 3.9 times as many interviewer-years that exhibit fluidity on one dimension or the other compared to both. Thus, interviewers experience fluidity in both their Hispanic origin and racial identification, but these changes need not occur simultaneously.

Analytic Approach

Our analysis proceeds in several steps. Once we establish the overall level of fluidity in the sample, we explore the timing of changes, arguing that rapid changes (e.g., within the same day) more plausibly suggest interaction-specific influences on identification, whereas changes further apart in time or in patterned ways across a survey year are less likely to be interaction specific. Next, we explore the relationship between respondent and interviewer identification, including whether observed matches can be explained by random or systematic measurement error. Lastly, we use fixed effects regression to explore whether aspects of the interview interaction influence how interviewers racially identify.13

Measurement-error simulations

Our assessment of measurement error begins with a randomization test to determine whether respondents and interviewers match more often than would be expected if their identifications were independent. To do so, we count how many times interviewers reported a racial category that matched any of the categories chosen by the respondent and then randomly reorder the respondents within interviewers. We then recount the matched responses in these reshuffled data and repeat this procedure 10,000 times to generate a distribution of how much matching would exist if interviewer and respondent identification were independent. Randomizing the order of respondents ensures that matching racial identifications can only occur by chance. If the actual amount of matching is an outlier in this distribution, then the racial identification matching we observe is unlikely to result from chance alone.
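The logic of this permutation test can be sketched as follows. This is an illustrative toy implementation, not the authors' code; the record structure, the toy data, and the decision to treat each race report as a set of categories (since up to three may be selected) are our assumptions.

```python
import random

# Toy records: (interviewer_year, respondent_races, interviewer_races).
# Race reports are sets because up to three categories may be selected.
data = [
    ("iv1-2006", {"White"}, {"White"}),
    ("iv1-2006", {"Black"}, {"Black"}),
    ("iv1-2006", {"White"}, {"White"}),
    ("iv2-2006", {"Hispanic"}, {"White"}),
    ("iv2-2006", {"White"}, {"White"}),
]

def n_matches(records):
    # A "match" means the interviewer reported at least one category
    # that the respondent also reported.
    return sum(bool(resp & iv) for _, resp, iv in records)

def permute_within_interviewer(records, rng):
    # Shuffle respondent reports within each interviewer-year so that any
    # matching with the (fixed) interviewer reports occurs only by chance.
    by_iv = {}
    for iv_year, resp, _ in records:
        by_iv.setdefault(iv_year, []).append(resp)
    for iv_year in by_iv:
        rng.shuffle(by_iv[iv_year])
    out, idx = [], {iv_year: 0 for iv_year in by_iv}
    for iv_year, _, iv in records:
        out.append((iv_year, by_iv[iv_year][idx[iv_year]], iv))
        idx[iv_year] += 1
    return out

rng = random.Random(0)
observed = n_matches(data)
null = [n_matches(permute_within_interviewer(data, rng)) for _ in range(10000)]
p_value = sum(m >= observed for m in null) / len(null)
```

If `observed` sits in the upper tail of `null` (a small `p_value`), matching beyond chance is indicated.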

Next, we consider cases where interviewers exactly match all the respondent’s identifications. These responses could represent confused interviewers who thought they were supposed to identify the respondent’s race rather than their own. We adjust our randomization test to exclude all exact matches between interviewer and respondent and instead determine whether aligned responses (i.e., when interviewers and respondents match on at least one but not all identifications) are more common than would be expected by chance. Interviewer characteristics, such as their HR racial identification, represent alternative systematic measurement-error-related explanations for the observed changes; see Table 1 and Appendix A for evidence that characteristics such as identifying as Hispanic (cf. Alba, Lindeman, and Insolera 2016) cannot fully account for the level or patterns of fluidity in these data.

To help visualize the observed response patterns, we report an illustrative cross-tabulation of interviewer and respondent racial identifications among interviewers who tend to identify as monoracial White (i.e., they select White alone more than 65% of the time). This allows us to explore the extent of response alignment absent especially fluid outliers. We focus on White identification to maximize our sample size as the interviewer pool predominantly identifies as White (see Table 1). We fit a Poisson regression model to the cross-tabulation to estimate the relative frequency that interviewers report a non-White race that exactly matches the respondent (i.e., reporting Black alone when the respondent does the same), and the frequency of interviewers reporting a non-White race that is aligned with the respondent (i.e., reporting White and Black when the respondent identifies as Black alone).14 The model includes covariates for the rows and columns (to control for overall reporting tendencies), a dummy variable identifying cells with an exact match, and a dummy variable identifying cells with an aligned response.15
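The structure of this model can be illustrated by constructing the cross-tabulation's design terms. This is a minimal sketch using a hypothetical, reduced three-category space (the paper uses the full GSS category list); it shows only how the exact-match and aligned-response dummies are assigned to cells, not the authors' estimation code.

```python
from itertools import product

# Hypothetical, reduced category space for illustration only.
cats = ["White", "Black", "White+Black"]

def parse(label):
    # Treat a combined label like "White+Black" as a set of categories.
    return set(label.split("+"))

design = []  # one design-matrix row per cell of the cross-tabulation
for iv_label, resp_label in product(cats, cats):
    iv, resp = parse(iv_label), parse(resp_label)
    exact = int(iv == resp)                        # identical reports
    aligned = int(bool(iv & resp) and iv != resp)  # overlap, but not identical
    design.append({"interviewer": iv_label, "respondent": resp_label,
                   "exact_match": exact, "aligned": aligned})
```

Row and column covariates (capturing overall reporting tendencies) plus these two dummies would then be fit to the observed cell counts with any Poisson GLM routine.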

Multivariable models

We model the interviewer’s racial identification as a function of characteristics of both the respondent and the interviewer while controlling for interviewer-year fixed effects to account for stable interviewer characteristics. If some interviewers are more or less likely to make recording errors, are assigned to particular areas, are differently specialized, were trained differently across years, or are predisposed to a particular race response, as long as those characteristics are fixed for a given interviewer-year they cannot drive our results. By including interviewer-year fixed effects we exclude variation between interviewer-years and instead only identify differences within interviewer-years.16 Thus, the models do not predict which interviewers are more likely to identify as White overall, they predict under what circumstances, in a given year, the interviewer is more likely to self-identify as White. Interviewer-year fixed effects are more conservative than interviewer fixed effects would be, as they are equivalent to the interaction between interviewer and year fixed effects. In each model, standard errors are clustered by interviewer-year to account for correlations among responses within interviewer-years.
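The intuition behind the within-interviewer-year estimator can be sketched with a small demeaning example. The data and variable names here are invented for illustration; subtracting interviewer-year means is algebraically equivalent to including interviewer-year fixed effects, so only within-interviewer-year variation identifies the slope.

```python
# Toy observations: (interviewer_year,
#                    y = interviewer identifies as White (0/1),
#                    x = respondent identifies as White (0/1)).
obs = [("a", 1, 1), ("a", 1, 1), ("a", 0, 0),
       ("b", 0, 1), ("b", 0, 0), ("b", 0, 0)]

def demean(values, groups):
    # Subtract each group's mean, removing all stable group characteristics.
    sums = {}
    for g, v in zip(groups, values):
        sums.setdefault(g, []).append(v)
    means = {g: sum(vs) / len(vs) for g, vs in sums.items()}
    return [v - means[g] for g, v in zip(groups, values)]

groups = [o[0] for o in obs]
y = demean([o[1] for o in obs], groups)
x = demean([o[2] for o in obs], groups)

# OLS slope on the demeaned data = within (fixed effects) estimator.
beta = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
```

In interviewer-year "b" the outcome never varies, so that unit contributes nothing to the numerator; the estimate is driven by covariation within units whose responses change.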

These models address concerns about some scenarios of random measurement error in reporting as well. If observed change is the product of random measurement error, then periods of fluidity would be randomly distributed over time and should be independent of other variation after controlling for interviewer-year fixed effects.17 Although random measurement error could introduce instability in our coefficients, it would not introduce consistent bias.18

We estimate separate linear probability models predicting the interviewer’s racial identification for all monoracial categories that have sufficiently large sample sizes in the GSS: White, Black, Asian Indian, Chinese, Some Other Race, and Hispanic, whether reported on the origin question or by writing in a Hispanic racial identification.19 These outcomes are binary, with the category of interest compared to all other possible responses; nevertheless, we argue linear probability models are preferable to logistic regression models for several reasons. First, because we incorporate interviewer-year fixed effects, logistic regressions are only estimable conditional on interviewers experiencing a change in the dependent variable in a given year, whereas linear probability models can incorporate information from all interviewer-years. Second, because we are interested in the marginal effects on racial identification and not in predicting the identification of interviewers, we are less concerned about the possibility of out-of-bounds prediction.20

All models include controls for the logged population size of the place of interview, whether the interview was conducted in Spanish, and whether the interviewer reported a Hispanic origin following the interview. We also account for other characteristics of the respondent, including whether the respondent had ever been married, their age, and their interviewer-recorded sex category (female/male).

Limitations

Though we handle our data and apply our methods as carefully as possible, no study is without limitations. First, we cannot make strong generalizations based on a convenience sample of interviewers (as opposed to the probability sample of respondents) but, as we address in the discussion, we expect our results to generalize beyond these interviewers and this survey setting. Second, if interviewers work for multiple years at NORC, we cannot follow them across years. We also cannot be sure of the order of interviews conducted by a single interviewer within the same day.21 Finally, linear probability models are susceptible to heteroskedasticity in the error distributions and to incorrectly specifying the functional form if the true relationship is nonlinear. Even with these limitations, our analysis provides unique insight into how frequently and under what circumstances a person’s racial identification changes across interactions.

Results

We find at least one change in the interviewer’s self-identified race or Hispanic origin in 23 percent of interviewer-years between 2004 and 2018.22 Such changes occur, on average, in 5 percent of GSS interviews (see Table 2).23 Fluidity of this magnitude is in line with previous research but not so common as to suggest an absence of social constraints on individuals’ decisions about how to identify.24 To unpack the variation in racial identification, we first explore whether fluidity is patterned by interview timing. We go on to consider additional measurement-error-related explanations, both random and systematic, before examining whether the patterns of fluidity are consistent with racial mirroring.

Table 2.

Frequency of Racial Fluidity across Time Windows

Survey year Same day Five days Ten days Fifteen days Thirty days
5.0% 4.2% 4.6% 4.8% 4.8% 4.8%

Source: General Social Survey, 2004–2018. Frequencies of response change are calculated among the set of consecutive interviews in which changes are possible (e.g., the “Same Day” percentage is among days where multiple interviews were conducted, “Ten Days” corresponds to multiple interviews conducted anytime from on the same day through 10 days apart). When multiple interviews were conducted in a single day, ties are broken by computing all possible orderings and taking the average proportion of changes across them.
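The tie-breaking rule described in the table note can be sketched as follows. The three same-day reports here are invented for illustration; the idea is to average the proportion of consecutive-response changes over all possible orderings of the day's interviews.

```python
from itertools import permutations

# Hypothetical same-day race reports for one interviewer (true order unknown).
reports = ["White", "Black", "White"]

def change_rate(seq):
    # Proportion of consecutive pairs whose responses differ.
    pairs = list(zip(seq, seq[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

perms = list(permutations(reports))
avg_change = sum(change_rate(p) for p in perms) / len(perms)
```

Orderings like (White, White, Black) yield one change out of two transitions, while (White, Black, White) yields two; averaging over all orderings removes the dependence on the unknown within-day sequence.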

Interview Timing

We start by asking whether racial identifications are more likely to differ between interviews that occur closer together or further apart in time. We do this by examining all interviews conducted by the same interviewer on the same day, consecutive interviews conducted within five days, within ten days, within fifteen days, within thirty days, and finally all pairs of interviews over the survey year. Table 2 shows that there is a high degree of similarity across the different time windows, ranging from 4.2% on the same day to 4.8% within a 30-day window. This relative uniformity is consistent with the idea that observed fluidity does not occur only as shifts between relatively stable identities, but that the context of each interview could influence interviewers’ identification.

We also explore how fluidity is distributed across the survey period by visually comparing plots of interviewer-year, mean-centered racial identification over time to ideal-type plots for various scenarios. By removing the interviewer-year mean before plotting the probabilities, the differences shown are within interviewer-years (akin to interviewer-year fixed effects). Thus, the plots are unaffected by changes in the interviewer pool over time both within and between years (e.g., if the interviewer pool becomes older or younger, or more educated or less educated). These plots are normalized to start at day zero—the first interview in each survey year—and allow us to explore whether there are long-term trends in reporting specific categories, or if particularly odd days or weeks produced anomalies in data collection.

Figure 2 shows three ideal-type plots reflecting different scenarios of interview timing influencing racial identification. In each plot a solid line is drawn at zero and a dashed line shows the average within-interviewer-year change over time. The first scenario is a monotonic increase, which would occur if individual interviewers became more likely to identify as the given category over the survey period. The second scenario illustrates a cyclical pattern if certain categories were reported more often on certain days of the week—e.g., if interviewers are more likely to select the first option (White) on Fridays.25 Finally, if interviewers select a category at higher- or lower-than-usual levels on a particular day or week, the points for those days would create a spike or dip far from zero. Figure 3 plots the observed probabilities of identifying as the four largest categories: White, Black, Hispanic, and Some Other Race, which we compare to the ideal types in Figure 2.

Figure 2.

Note: Day-of-the-week and day-of-the-year plots represent plausible scenarios of systematic bias. These images provide benchmarks to compare against observed patterns in Figure 3. The zero line represents the average racial identification within interviewer-years, and deviations from these lines represent less common identifications for that interviewer-year.

Ideal-type simulations of interviewer-year mean-centered probabilities of racial identification.

Figure 3.

Note: All plots refer to when these categories are selected alone and not in combination with other categories. Day zero is the first interview in each survey year. ^a Hispanic includes identification as either Hispanic race or origin.

Observed interviewer-year mean-centered probabilities of racial identification across the survey year.

Overall, the points in all four panels of Figure 3 are tightly clustered around zero, with no noticeable cyclical rhythm or anomalous weeks where probabilities change dramatically. Nevertheless, several features of these graphs are worth noting. First, there are a few outliers, particularly at the start of each survey year. It is possible that there is greater variation in responses as interviewers become accustomed to asking others to racially identify and routinely identifying themselves. Second, the variance is higher overall for White and Black, two larger categories that appear first and second in the list of responses. Lastly, the variance for all categories appears to be higher later in the year, and there is a slight upward trajectory in the frequency of identifying only as White. It is possible that multiracial White interviewers are more likely to drop their hyphenated status over the course of a survey year.26 These intuitions from visual inspection were confirmed by a series of regression models with fixed effects for day of the year, day of the week, weeks within the year, and general time trends. Although several estimates from these models were significant, considerably fewer reached statistical significance than would be expected by chance alone.27

Chance, Error, or Alignment?

As the patterns of interviewer identification are not driven by either interviewer characteristics across the sample (see Table 1) or the timing of interviews, we turn to several measurement-error-related explanations for why the interviewers’ and respondents’ identifications might match. We first conduct a randomization test to see if matching occurs more often than would be expected if the two reports were independent. In the GSS data, we see 14,422 interviews with matching interviewer and respondent race reports. Of the 10,000 simulations where we randomized the order of respondents as described above, we find none have this many matches, corresponding to a two-tailed p-value of less than 0.001. Ninety-five percent of the iterations had between 13,981 and 14,025 matches, and the average was 14,003. These results indicate the observed amount of matching between interviewers and respondents exceeds the amount expected by chance, which supports our conclusion that variation in interviewer racial identifications was not produced by random error alone.
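A randomization test of this kind can be sketched as follows; the reports, counts, and number of shuffles below are invented for illustration and do not reproduce the GSS figures.

```python
import random

random.seed(0)

# Hypothetical interviewer and respondent reports (the real analysis
# uses GSS interview pairs; these values are invented for illustration).
interviewer = ["White", "White", "Black", "Black", "Hispanic"] * 100
respondent  = ["White", "Black", "Black", "White", "Hispanic"] * 100

observed = sum(i == r for i, r in zip(interviewer, respondent))

# Null distribution: shuffle which respondent report is paired with
# which interviewer report, breaking any dependence between the two.
null_counts = []
for _ in range(2000):
    shuffled = random.sample(respondent, k=len(respondent))
    null_counts.append(sum(i == r for i, r in zip(interviewer, shuffled)))

# Two-tailed p-value: share of shuffles at least as far from the null
# mean as the observed match count.
mean_null = sum(null_counts) / len(null_counts)
p = sum(
    abs(c - mean_null) >= abs(observed - mean_null) for c in null_counts
) / len(null_counts)
print(observed, round(mean_null, 1), p)
```

In this toy example the observed match count sits far above the shuffled distribution, mimicking the pattern the paper reports at much larger scale.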

Beyond matching by chance, another error-based explanation for the observed fluidity is that interviewers mistakenly believe the race and origin questions in the interviewer remarks are asking about the respondent. In this hypothetical, matching responses result from interviewers incorrectly repeating the respondent’s answers rather than reporting their own identification. We offer three statistical tests that isolate exact matches—which could result from either systematic interviewer error or substantively meaningful response change—and attempt to rule out this hypothesis.

First, we repeat our simulation but exclude exact matches and instead focus on aligned responses—i.e., responses where interviewers and respondents share at least one, but not all, of their racial responses. Whereas an interviewer identifying as Black after interviewing a respondent who identifies as Black could represent either mirroring or mistaken repetition of the respondent’s answer, an interviewer reporting their identification as White and Black after a respondent identifies as Black does not reflect the same type of mistake. In our data, we see 1,582 interviews where interviewers aligned with respondents but do not match exactly, and of the 10,000 simulations, we find that none have as many aligned responses, corresponding to a two-tailed p-value of less than 0.001. Ninety-five percent of these iterations produced alignment counts between 1,508 and 1,536, with an average of 1,522. Thus, not only are interviewer and respondent race reports significantly more likely to match than expected at random, but interviewers are also significantly more likely to align with respondents in subtler ways.
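The distinction between exact matches and aligned responses can be expressed as a simple set comparison; the category sets below are hypothetical.

```python
def aligned_not_exact(interviewer_races, respondent_races):
    """True when the two reports share at least one racial category
    but are not identical sets of categories."""
    interviewer_races = set(interviewer_races)
    respondent_races = set(respondent_races)
    return (
        bool(interviewer_races & respondent_races)
        and interviewer_races != respondent_races
    )

# An interviewer reporting White and Black after a respondent reports
# only Black is aligned but not an exact match:
print(aligned_not_exact({"White", "Black"}, {"Black"}))  # True
# Identical reports are exact matches, not alignment:
print(aligned_not_exact({"Black"}, {"Black"}))           # False
# Disjoint reports are neither:
print(aligned_not_exact({"White"}, {"Black"}))           # False
```

Restricting the randomization test to pairs satisfying this predicate isolates the subtler overlap that mistaken repetition of the respondent's answer could not produce.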

Second, we exclude interviewer-years with even one exact match between race reports before randomizing interview order. Among this subset, we observe 1,485 aligned responses, which again significantly exceeds chance (randomization test mean: 1,443; 95% interval: (1,436, 1,451); corresponding p-value for a two-tailed test: less than 0.001). Thus, even when we exclude data for all interviewers who ever plausibly mistook the interviewer racial identification question to be asking about the respondent, we still find that interviewers align their racial identification with the respondent more often than can be explained by chance.

Third, we exclude interviewer-years in which racial identification is highly variable and cross-tabulate interviewer and respondent racial identifications only for interviewer-years in which monoracial White is reported more than 65% of the time. For legibility, Table 3 is a subset of the full contingency table, with bold cells to indicate exact non-White matches and italics cells to indicate non-White aligned identification. We then estimate a Poisson regression model on cell frequencies for the full cross-tabulation as described above. The coefficients for exact matches (5.05, robust t-statistic 13.5) and aligned responses (2.6, robust t-statistic 6.7) are statistically significant. Thus, even among interviewers who usually identify as monoracial White, we observe both more non-White alignment and more non-White matching in their race reports than expected due to chance alone.28
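A Poisson regression on cell frequencies with match and alignment indicators can be sketched as follows; the cell counts and indicator layout are invented for illustration, and the model is fit with a basic Newton-Raphson routine rather than the authors' actual software.

```python
import numpy as np

# Synthetic cell counts from a small interviewer-by-respondent
# cross-tabulation (flattened); values are invented for illustration.
counts = np.array([900.0, 40.0, 10.0, 35.0, 850.0, 12.0, 9.0, 11.0, 50.0])
match  = np.array([1.0, 0, 0, 0, 1, 0, 0, 0, 1])  # exact-match (diagonal) cells
align  = np.array([0.0, 1, 0, 1, 0, 0, 0, 0, 0])  # aligned (partial-overlap) cells

# Design matrix: intercept plus the two cell-type indicators.
X = np.column_stack([np.ones_like(counts), match, align])

# Newton-Raphson for the Poisson log-likelihood with a log link,
# started at the log of the overall mean for numerical stability.
beta = np.array([np.log(counts.mean()), 0.0, 0.0])
for _ in range(50):
    mu = np.exp(X @ beta)
    grad = X.T @ (counts - mu)
    hess = X.T @ (mu[:, None] * X)
    beta = beta + np.linalg.solve(hess, grad)

# Large positive coefficients on match and align indicate more exact
# matching and alignment than the baseline cells would predict.
print(beta)  # [intercept, match, align]
```

Because the two indicators partition the cells in this toy layout, the fitted coefficients recover the log ratios of the group mean counts to the baseline mean.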

Table 3.

Cross-tabulation of Interviewer and Respondent Racial Identification among Interviewers Who Tend to Identify as White

[Table content rendered as an image in the original publication.]

Source: General Social Survey, 2004–2018.

Note: Racial identification among interviewer-years with at least 65% monoracial White identifications. Row percentages are shown, with counts in parentheses. Rows or columns with fewer than five total non-monoracial White reports not shown (all five are not necessarily visible in this abbreviated table). Cells with bold text represent exact matches; cells with text in italics represent aligned responses. AIAN stands for American Indian and Alaska Native.

Racial Mirroring

To build on our simulations and Poisson model, we turn to fixed effects regression to explore patterns of racial identification more broadly throughout the sample. All else equal, we find that, within a given year, interviewers are between 3 and 9 percent more likely to report a race that matches the respondent’s self-identification than they are to report a non-matching category (see Table 4), and the increased probabilities are statistically significant for each category examined except for “Some Other Race”.29 This suggests interviewers are more likely to identify as monoracial White, Black, Asian Indian, Chinese, or Hispanic when the respondent also selected the same category. As shown in Table 5, these fixed effects results are robust when we exclude any interviewer who always matches the respondent’s exact racial identifications across the survey year. Together with the series of statistical tests described above, these estimates increase our confidence that observed patterns of fluidity include instances of interviewers mirroring the racial identifications of respondents and are not only the result of random chance or systematic errors such as interviewers who are mistakenly reporting the respondent’s answer for themselves.
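The interviewer-year fixed effects specification is equivalent to demeaning the outcome and regressor within interviewer-years and running OLS on the transformed data; the sketch below uses hypothetical interviews.

```python
import pandas as pd

# Hypothetical interviews: the outcome is whether the interviewer
# identified as Black; the regressor is whether the respondent did.
df = pd.DataFrame({
    "interviewer_year": ["a", "a", "a", "b", "b", "b"],
    "ivw_black":        [0, 1, 1, 0, 0, 1],
    "resp_black":       [0, 1, 0, 0, 1, 1],
})

# Within transformation: demeaning inside each interviewer-year absorbs
# the interviewer-year fixed effects.
g = df.groupby("interviewer_year")
y = df["ivw_black"] - g["ivw_black"].transform("mean")
x = df["resp_black"] - g["resp_black"].transform("mean")

# The OLS slope on the demeaned data is the fixed effects estimate of
# how respondent identification predicts interviewer identification.
slope = (x * y).sum() / (x * x).sum()
print(slope)
```

A positive slope here corresponds to the paper's finding: within a given interviewer-year, matching identifications are more likely when the respondent reports that category.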

Table 4.

Linear Probability Models Predicting Interviewer Racial Identification by Characteristics of the Interview

Interviewer identifies as…

| | White | Black | Asian Indian | Chinese | Some Other Race | Hispanic^a |
| --- | --- | --- | --- | --- | --- | --- |
| Respondent identifies as same race | 0.055*** (7.931) | 0.063*** (6.283) | 0.090*** (3.496) | 0.057** (2.843) | 0.050 (1.907) | 0.037*** (3.662) |
| Interviewer identifies as Hispanic origin | −0.016 (0.282) | −0.127*** (−3.666) | −0.023* (−2.256) | −0.010 (−1.152) | 0.042 (1.662) | |
| Interview conducted in Spanish | 0.015 (1.196) | 0.004 (0.640) | −0.001 (−0.629) | 0.002 (1.839) | 0.006 (1.015) | 0.043** (2.922) |
| Respondent’s spouse was present | 0.001 (0.474) | 0.001 (0.601) | −0.000 (−0.178) | −0.000 (−1.234) | 0.000 (0.393) | 0.003 (1.592) |
| Respondent has never married | −0.005 (−1.819) | 0.005* (2.319) | −0.001 (−1.913) | 0.000 (0.595) | −0.000 (−0.533) | 0.000 (0.243) |
| Respondent’s age (decades) | −0.001* (−2.348) | 0.001 (1.898) | 0.000 (0.347) | 0.000 (0.020) | 0.000 (0.026) | −0.001 (−1.868) |
| Population (logged) | −0.001* (−2.023) | 0.001 (1.216) | 0.000 (0.223) | 0.000 (0.508) | 0.000* (2.496) | 0.001 (1.543) |
| Interviewer-year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |

Note: Results from models predicting differences within a given interviewer for a particular survey year, t-statistics accounting for clustering within interviewer-year in parentheses. Each model has 20,619 observations over 1,259 interviewer-years. Models additionally control for respondent sex, which was nonsignificant.

***p < 0.001, **p < 0.01, *p < 0.05.

^a In this model, we predict identifying as Hispanic either by race or origin. Results are similar and statistically significant if we use only race or only origin.

Table 5.

Linear Probability Models Predicting Interviewer Racial Identification Excluding Interviewers Who Always Exactly Match the Respondent

Interviewer identifies as…

| | White | Black | Asian Indian | Chinese | Some Other Race | Hispanic^a |
| --- | --- | --- | --- | --- | --- | --- |
| Respondent identifies as same race | 0.054*** (7.825) | 0.063*** (6.283) | 0.070** (2.983) | 0.057** (2.843) | 0.050 (1.906) | 0.037*** (3.671) |
| Interviewer-year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |

Note: Results from models predicting differences within a given interviewer for a particular survey year, t-statistics accounting for clustering within interviewer-year in parentheses. Models are analogous to those shown in Table 4. Each model has 19,651 observations over 1,087 interviewer-years. Controls not shown.

***p < 0.001, **p < 0.01, *p < 0.05.

^a In this model, we predict identifying as Hispanic either by race or origin. Results are similar and statistically significant if we use only race or only origin.

The appendix includes further robustness checks that vary our model specifications to ensure our results are not anomalous. These include models that only use the first racial identification for both interviewers and respondents (Table A2), subgroup comparisons based on characteristics of interviewers (Table A3), respondents (Table A5), and subsets of survey years (Table A4). In each case, our conclusion that interviewer identifications are more likely to mirror respondent identifications remains the same.

Discussion

This paper demonstrates that an individual’s racial identification can change as they engage in interactions with different people. We draw on a research tradition that suggests the salience of racial identities shifts interactionally and provide novel evidence that an individual’s racial identification may shift to mirror the identification of those with whom they are interacting.30 We explored several alternative explanations for this fluidity, and none can account for the patterns we observe. The observed fluidity is not explained solely by random measurement error and is not limited to particular types of interviewers who might be expected to account for inconsistent reporting (see Table 1, and Appendices A and B). Further, although it is possible that some interviewers may have mistaken the interviewer remarks for questions about the respondents—despite a clear questionnaire prompt to answer about themselves—we find evidence of racial mirroring beyond the exact matching such a mistake would produce. These data reveal consistent patterns of interviewers both exactly matching respondents and more subtly aligning responses (e.g., adding a race reported by the respondent to the interviewer’s typical response), congruent with social psychological research on imitating behavior and affiliative interactions.

Future research could interview the interviewers to study how they approach building rapport with survey respondents and further illuminate their role in the generation of survey data on race and ethnicity (cf. Wilkinson 2011). We see such work as a complement to but not a replacement for statistical analysis of racial fluidity in survey data. Understanding whether interviewers are aware of variation in their responses and how they seek to explain it can provide insight into how conceptualizations of race as static and obvious are maintained. However, we caution against treating racial identification fluidity as something that should be “fixed,” through admonishment in interviewer trainings or post-survey data cleaning.31 Researchers may achieve face validity by forcing survey responses to conform to static assumptions about race but doing so compromises construct validity because it obscures the interactional nature of racial identification.

Rethinking race-of-interviewer effects

That interviewers’ racial identifications can be as fluid as anyone else’s has implications for understanding how interviewers influence survey results in general (e.g., West and Blom 2017) and how we interpret “race-of-interviewer” effects in particular. Research on these effects typically assumes that race-matching respondents and interviewers reduces bias in survey responses by reducing social desirability bias (Krysan and Couper 2003). This view treats the interviewer’s race as a fixed external factor, to which the respondent reacts, rather than a form of identification (and perception) subject to interactional influences.

Notably, race-of-interviewer-effects research using the GSS appears to rely on the static measure recorded on HR forms and not the repeated post-interview identifications (see, e.g., An and Winship 2017).32 For analyses that require strict independence of interviewer reports from interactions with respondents, the HR race measure might be preferable. Even so, the HR categorization is the product of a previous interaction, when interviewers were asked to fill out a form for their employer, and the context and stakes of identification were different. Thus, the HR record may not represent the interviewer’s race at the time of the interview, either as they would identify it or as it would be perceived by the respondent.

Our results suggest that rather than thinking of race-of-interviewer effects as strictly exogenous influences on survey responses, we might think of the racial identifications of interviewers and respondents as mutually endogenous. Whatever response biases their identifications might indicate, how the respondent and interviewer racially identify results from a subtle interplay as both negotiate their own identification, the identification of the other, and the proper schemas of interaction. In this sense, our findings suggest that efforts to “match” interviewers and respondents by race would entail a more dynamic process than is typically acknowledged (see also O’Brien 2011).

Beyond the Survey

Our discussion so far has focused on survey interviews, but we do not expect the implications to be limited to this case. Data collection on race and ethnicity often represents an interactional encounter, even if only implicitly, as responses are weighed against the expectations of others. Implicit and explicit interactional goals can change how participants identify or present themselves (Kang et al. 2016; Richeson and Somers 2016; Ridgeway 2009), and whether the information was recorded as self-identified or observed also shapes responses (Roth 2016). Rather than viewing race as a straightforward demographic characteristic, racial categorization should be considered as crystallizing within particular interactions that subtly shape one’s identification.

Indeed, our results highlight that a person’s racial identification can change not only in response to macro-historical shifts in racial boundaries and categories, or in response to longer-term changes in their social position, but also within a given day depending on who they meet. This adds support to the claims of interactionists who argue for a subtler understanding of negotiated identities (e.g., Renfrow 2004). Much research on the malleability of race has focused on passing and tends to consider such changes as intentional, consequential, and costly. Rather than seeing racial fluidity solely as purposeful misrepresentation, we suggest that acts of racial identification and presentation can also reflect subtle, perhaps subconscious, efforts to align perspectives and ease interaction. That said, our findings do not imply a lack of social constraints on racial identification: stability is far more common than fluidity and some boundaries are more likely to be traversed than others. Yet, within existing constraints, interactional flexibility plays a role in shaping data on racial identification.33

More generally, we see racial mirroring as analogous to other instances when people, consciously or not, change their presentation of self to match the situation at hand. Take, for instance, the largely automatic aligning of accent and speech patterns which is hypothesized to simplify interactions and facilitate communication (Pardo 2006; Pickering and Garrod 2004). Though these transitory shifts in accent or dialect are common, people may not notice their behavior unless someone else points out the deviation from their normal patterns. Racial identification is another avenue where subtle alignment might facilitate reaching shared understanding. Although social constraints on shifting racial identification likely are stronger than constraints on shifting speech patterns, such that shifting identification is less common, the implications remain the same: both speech patterns and racial identifications emerge, and are best understood, in the context of dialogue.

In broader application, we expect to find racial mirroring whenever people are motivated to try to “fit in” or bridge differences. These situations range from students interacting in class (Boda 2019) to families navigating their individual and collective identities (Whitehead, Farrell and Bratter 2021). For example, the accuracy of projections for multiracial identification in the coming decades depends in part on whether parents and children identify differently when they are together versus when they are apart (cf. Bratter 2007; Harris and Sim 2002). Similarly, if married couples gravitate toward a shared identity, this has implications not only for whether and when we observe relationships to be inter- or intraracial but also for how we interpret subsequent outcomes like marital stability or psychological well-being (cf. Wong and Penner 2018). As the extensive literature on social selection and influence also suggests, homophily does not underlie all observed homogeneity (cf. Leenders 1997; Steglich, Snijders and Pearson 2010). Rather, relationships can become more homogeneous over time—or at particular points in time—to meet affiliative interactional goals (cf. Leszczensky and Pink 2019; Melamed et al. 2020). The opposite could also be true when the goal is to maximize difference or create interactional disruption (Tavory and Fine 2020). Either way, observed racial homophily or heterophily is shaped by the interactional processes that influence both individual and collective racial categorizations.

Conclusion

We began by observing that GSS interviewers do not report their racial identification consistently following each interview they conduct. Our analysis explored a series of factors that could contribute to such response change, and we conclude that interviewers aligning their racial identifications with the identifications of respondents is an important and overlooked explanation. Other explanations related to random measurement error or systematic biases, such as anomalies in interview timing, particular survey years, particular types of interviewers, or misunderstanding the interviewer questions to be about the respondent, cannot fully explain the observed pattern. Our hypothesized mechanism of racial mirroring is also consistent with prior research demonstrating the many ways individuals mimic each other in affiliative interactions.

Our results have implications for how to conceptualize and measure race and how to interpret research on racial inequality. As a growing body of work shows, race is not an inherent characteristic; it can be made and remade through interactions, changes in understandings of the self, and in how others are perceived. However, more research is needed on the interactional contexts that produce racial categorization. Interactions can lead to racial distancing as well as affiliative behavior such as mirroring, and stable categorization can also be an interactional accomplishment. For instance, conversations about particularly contentious issues could make spanning perceived social distance difficult and lead to lower-than-expected alignment rather than the higher rates we observe here. Future work should explore the potential for racial mirroring in settings where the stakes of identification are varied and the need to build rapport is less clear. Such work could help establish the contours of social constraint and when mirroring of racial identification is most (or least) likely.

Bounding effects and pinpointing mechanisms aside, simply acknowledging that racial categorization both influences interactions and can be influenced by them should lead to further caution when we use race as an explanatory tool. More than two decades ago, Zuberi (2001) admonished researchers against mistaking causes for effects in studies of race and racism, and careful interpretation of determinants, consequences, and time-ordering in studies of racial inequality remains sorely needed. As our case illustrates: “race” does not predetermine a person’s perceptual biases, or how others will react to their presence. Interactional influence can flow in both directions, shaping how people understand themselves.

Supplementary Material

Appendices_soac115
replication_code_soac115

Footnotes

1

Our focus is identification, or the act of identifying oneself to others. However, we draw on literature on identity, broadly construed, in part because conceptual distinctions between the public and private facets of identity are not always made clear (see Brubaker and Cooper 2000).

2

The GSS includes many racial-attitude items but some have skip patterns that depend on answers to previous questions and not all questions appear on all ballots; thus, all respondents are not asked all items.

3

We refer to “Hispanic origin” throughout to signal that it derives from a separate survey question, though distinctions between race and ethnicity or “origin,” following US Census Bureau terminology, likely are not recognized by most respondents (e.g., Parker et al. 2015).

4

GSS interviewers are generally assigned to respondents who live nearby—roughly within a 50-mile radius, in non-rural areas; interviewers and respondents are not explicitly matched by race.

5

Our unit of analysis is an interviewer-year, and we maintain this specificity when describing our data and empirical findings, but we refer to “interviews” or “interviewers” for the sake of exposition when interpreting our results and discussing their implications.

6

Across years, the number of interviewers ranged from 134 in 2016 to 194 in 2012, with an average of 158. The average number of racial reports per interviewer ranged from 10 in 2012 to 25 in 2006, with an average of 17.

7

An example interviewer remarks form can be found at: http://gss.norc.org/documents/quex/2008%20REMARKS.pdf (retrieved 18 September 2022)

8

We consider people who report Hispanic origins or identify their race as Hispanic as part of a shared Hispanic category but, when relevant for methodological purposes, we note when we refer to the origin question or the race question.

9

Appendix A presents results using two alternate codings: (1) including racial identifications as a given category alone or in combination with other categories and (2) using only the first race response.

10

The HR information includes the interviewer’s sex, age, and years of NORC survey experience. The average interviewer was 53 years old and had worked at NORC for 3.3 years; 81 percent of interviewers were female.

11

Although we refer to an HR record of multiracial here, one could operationalize a multiracial interviewer in several ways with these data (e.g., selecting multiple categories following an interview). We do not claim that an HR report of multiple races is the definitive measure.

12

For interviewer descriptive statistics by each survey year and a comparison of the full tenure distributions for fluid and stable interviewer-years, see Appendix B.

13

Vaisey and Miles (2017) discuss the limitations of fixed effects regressions for causal interpretation. See Appendix C for a discussion of why their critiques are not an overriding concern in our case.

14

Results from our Poisson model are similar when using any racial identification cutoff between 65% and 95% (See Appendix D).

15

Specifically: log E[y_ij] = β0 + β1·M_ij + β2·A_ij, where y_ij is the observed count in row i and column j, and M_ij and A_ij are the exact-match and alignment indicators for cell (i, j) as defined above. Because the full cross-tabulation is relatively sparse, we assess zero inflation by comparing the observed and predicted number of zeros. The predicted number is higher than observed (5,251 and 5,224, respectively), suggesting zero inflation is not an issue. We also test the dispersion assumptions of the Poisson model and find the dispersion ratio is statistically indistinguishable from 1 (0.998). Indeed, a negative binomial model fails to converge.

16

Because we cannot identify interviewers across survey years, our results could be influenced by anomalous repeat interviewers who show up in the data as separate interviewer-years. However, we see similar fluidity patterns among interviewers who have short and long NORC tenures (see Appendices A and B). Although repeat interviewers are overrepresented in our data, this is not driving our results.

17

Not including interviewer-year fixed effects could yield false positives under a random measurement-error model if interviewer characteristics were correlated with interviewer’s race. Specifically, our models control for a fixed propensity for interviewers to “misreport” their race within interviewer-years. For further discussion, see Saperstein and Penner (2016).

18

We conclude that our results are not the spurious result of a single unstable estimate because the results are similar across racial classifications and for various subgroups.

19

Results are similar when either Hispanic origin or Hispanic race identification is included alone. The Hispanic identification models do not control for Hispanic origin; all other controls are the same.

20

Interviewer-specific probabilities of identifying as a given race tend to be close to one or zero, and roughly half (54%) of the fitted values from the regressions in Table 4 are outside the bounds [0, 1]. Conditional logistic regression models, however, produce coefficients in the same direction with comparable significance (see Appendix A, Table A6).

21

On average, interviewers conducted multiple interviews on 12% of the days they spent interviewing, with an average of 2.1 interviews on those days.

22

A total of 256 interviewer-years have at least one change in racial identification, and of the 1,006 with stable racial identification, 31 have fluid Hispanic origin identification, for a total of 287 interviewer-years with at least one change in race or origin out of 1,262 interviewer-years (see Table 1).

23

We count consecutive differing reports as a change in race or origin identification. Without knowing the precise ordering of interviews within a given day, we calculate all possible orderings to break ties and take the average.
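Averaging over all possible within-day orderings can be sketched as follows; the example reports are hypothetical.

```python
from itertools import permutations

# Reports from earlier days (order known) plus one day with two
# interviews whose within-day order is unknown; values are hypothetical.
prior = ["White"]
same_day = ["White", "Black"]

def n_changes(seq):
    # Count consecutive reports that differ.
    return sum(a != b for a, b in zip(seq, seq[1:]))

# Average the change count over every possible within-day ordering.
counts = [n_changes(prior + list(p)) for p in permutations(same_day)]
avg_changes = sum(counts) / len(counts)
print(avg_changes)  # (White, White, Black) has 1 change; (White, Black, White) has 2
```

Here the two orderings yield 1 and 2 changes, so the tie-broken average is 1.5.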

24

For comparison, Harris and Sim (2002) find 12% of youth identified inconsistently between surveys conducted several months apart, at school and then at home, Smith and Son (2011) find 8% of adults inconsistently racially identify in the 2006–2008 GSS Panel Survey, and Liebler et al. (2017) find 6.1% of their sample of children and adults was recorded differently in the 2010 census compared to Census 2000.

25

Though we mention Fridays, our approach does not emphasize the last day of the week, as not all interviewers work Monday to Friday.

26

NORC brings in a new set of interviewers for the end of the survey year, but these “closers” do not have more fluid racial identifications (see Appendix E).

27

No specific weeks were found to be significant. Among days in the year, only one was significantly associated with identifying as White, four with Hispanic, and four with Some Other Race. (With a 5% false positive rate, about 60 significant coefficients would be expected at random.) Of specific days in the week, only Thursday was significant (relative to Sunday as the reference), and only for Some Other Race. (Here we would expect about 1 significant coefficient at random.) Finally, coefficients for changes over time in identification as White and Some Other Race were small, positive, and significant. A downward trend in reporting both White and American Indian may partially offset the observed transition into the White category (see Appendix F).

28

Results are similar if we instead focus on interviewers who identify as monoracial Black 65% or more of the time (See Appendix D).

29

For linear probability models, coefficients can be interpreted directly as changes in probability.

30

This implies racial mirroring on the part of respondents as well, as they attempt to match the interviewers. Unfortunately, we cannot observe both sides of the process with these data.

31

The GSS principal investigators have been aware of these patterns since at least 2014—when we first contacted them. They continued to collect the post-interview racial identifications and we continued to find similar patterns of fluidity in the data (see Table A4).

32

An and Winship (2017:82) describe their measure as “a categorical variable, including Whites, Blacks, Hispanics, Asians, and others with two or more races.” Those racial categories are from the HR report.

33

It is worth noting that while racism is typically viewed as a constraint on racial fluidity, the relationship may be more complicated. It is possible, even probable given histories of “passing”, that structural racism makes fluidity more common than it would otherwise be by increasing the stakes of racial identification.

Contributor Information

Robert E M Pickett, New York University, New York, NY, USA.

Aliya Saperstein, Stanford University, Stanford, CA, USA.

Andrew M Penner, University of California, Irvine, CA, USA.

Acknowledgements

We would like to thank Tom Smith and Jeremy Freese for discussions about the data, as well as Paul Chung, David Harding, Mike Hout, Cecilia Ridgeway, the NYU Race and Ethnicity Working Group, and the UC Berkeley Race, Ethnicity, and Inequality Working Group for their feedback. Earlier drafts of this paper were presented at the 2017 American Sociological Association Annual Meeting and the 2016 Population Association of America Annual Meeting.

Funding

This research was supported by an NICHD pre-doctoral training grant in Demography (T32-HD007275), the Russell Sage Foundation, and the Stanford Institute for Research in the Social Sciences (IRiSS).

Data Availability

All data for this paper come from the General Social Survey and are publicly available at https://gssdataexplorer.norc.org/ (Smith et al. 2018).

About the Authors

Robert Pickett is a postdoctoral research associate at New York University. His research explores the interplay between quantitative methods and constructionist theory.

Aliya Saperstein is an associate professor of sociology at Stanford University. Her research examines how categories of difference are operationalized and the consequences of those decisions for studies of stratification.

Andrew M. Penner is a professor of sociology at the University of California, Irvine. Penner’s research focuses on inequality, social categorization, and educational policy.

References

  1. Alba, Richard D., Scarlett  Lindeman, and Noura E.  Insolera. 2016. “Is Race Really So Fluid? Revisiting Saperstein and Penner’s Empirical Claims.” American Journal of Sociology  122(1):247–62. 10.1086/687375. [DOI] [PubMed] [Google Scholar]
  2. Ambady, Nalini, and Max  Weisbuch. 2010. “Nonverbal Behavior” Pp. 464–497 in Handbook of Social Psychology, edited by Susan T.  Fiske, Daniel T.  Gilbert, and Gardner  Lindzey. New Jersey: John Wiley & Sons, Inc., 10.1002/9780470561119.socpsy001013. [DOI] [Google Scholar]
  3. American Sociological Association . 2003. The Importance of Collecting Data and Doing Social Scientific Research on Race. Washington, DC: American Sociological Association. [Google Scholar]
  4. An, Weihua, and Christopher  Winship. 2017. “Causal Inference in Panel Data with Application to Estimating Race-of-Interviewer Effects in the General Social Survey.” Sociological Methods & Research  46(1):68–102. 10.1177/0049124115600614. [DOI] [Google Scholar]
  5. Antaki, Charles, and Sue  Widdicombe. 1998. “Identity as an Achievement and as a Tool.” Pp. 1–14 in Identities in Talk edited by Charles  Antaki, and Sue  Widdicombe. London: SAGE Publications, 10.4135/9781446216958.n1. [DOI] [Google Scholar]
  6. Ashmore, Richard, Kay  Deaux, and Tracy  McLaughlin-Volpe. 2005. “An Organizing Framework for Collective Identity: Articulation and Significance of Multidimensionality.” Psychological Bulletin.  130(1):80–114. [DOI] [PubMed] [Google Scholar]
  7. Bailey, Zinzi D., Nancy  Krieger, Madina  Agénor, Jasmine  Graves, Natalia  Linos, and Mary T.  Bassett. 2017. “Structural Racism and Health Inequities in the USA: Evidence and Interventions.” The Lancet.  389(10077):1453–63. 10.1016/S0140-6736(17)30569-X. [DOI] [PubMed] [Google Scholar]
  8. Blumer, Herbert. 1969. Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice Hall. [Google Scholar]
  9. Boda, Zsófia. 2019. “Friendship Bias in Ethnic Categorization.” European Sociological Review.  35(4):567–81. 10.1093/esr/jcz019. [DOI] [Google Scholar]
  10. Botvinick, Matthew, Amishi P.  Jha, Lauren M.  Bylsma, Sara A.  Fabian, Patricia E.  Solomon, and Kenneth M.  Prkachin. 2005. “Viewing Facial Expressions of Pain Engages Cortical Areas Involved in the Direct Experience of Pain.” NeuroImage  25(1):312–9. 10.1016/j.neuroimage.2004.11.043. [DOI] [PubMed] [Google Scholar]
  11. Bratter, Jenifer. 2007. “Will ‘Multiracial’ Survive to the Next Generation?: The Racial Classification of Children of Multiracial Parents.” Social Forces  86(2):821–49. 10.1093/sf/86.2.821. [DOI] [Google Scholar]
  12. Bratter, Jenifer, and Heather A.  O’Connell. 2017. “Multiracial Identities, Single Race History: Contemporary Consequences of Historical Race and Marriage Laws for Racial Classification.” Social Science Research.  68(2017):102–16. 10.1016/j.ssresearch.2017.08.010. [DOI] [PubMed] [Google Scholar]
  13. Brown, J. Scott, Steven  Hitlin, and Glen H.  Elder Jr. 2006. “The Greater Complexity of Lived Race: An Extension of Harris and Sim.” Social Science Quarterly.  87(2):411–31. 10.1111/j.1540-6237.2006.00388.x. [DOI] [Google Scholar]
  14. Brubaker, Rogers, and Frederick  Cooper. 2000. “Beyond ‘Identity’.” Theory and Society.  29(1):1–47. 10.1023/A:1007068714468. [DOI] [Google Scholar]
  15. Brubaker, Rogers, Mara  Loveman, and Peter  Stamatov. 2004. “Ethnicity as cognition.” Theory and Society.  33(1):31–64. 10.1023/B:RYSO.0000021405.18890.63. [DOI] [Google Scholar]
  16. Chang, Robert S.. 1999. Disoriented: Asian Americans, Law, and the Nation-State. New York: New York University Press. [Google Scholar]
  17. Day, Dennis. 2008. “Being Ascribed, and Resisting, Membership of an Ethnic Group.” Pp. 151–170 in Identities in Talk edited by Charles  Antaki, and Sue  Widdicombe. London: SAGE Publications, 10.4135/9781446216958.n10. [DOI] [Google Scholar]
  18. del  Pinal, Jorge, and Dianne  Schmidley. 2005. Matched Race and Hispanic Origin Responses from Census 2000 and Current Population Survey February to May 2000 Population Division Working Paper No. 79. Washington, DC: US Census Bureau. [Google Scholar]
  19. DiMaggio, Paul. 1997. “Culture and Cognition.” Annual Review of Sociology.  23:263–87. 10.1146/annurev.soc.23.1.263. [DOI] [Google Scholar]
  20. Doyle, Jamie M., and Grace  Kao. 2007. “Are Racial Identities of Multiracials Stable? Changing Self-Identification Among Single and Multiple Race Individuals.” Social Psychology Quarterly.  70(4):405–23. 10.1177/019027250707000409. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Freeman, Jonathan B., Yina  Ma, Maria  Barth, Steven G.  Young, Shihui  Han, and Nalini  Ambady. 2015. “The Neural Basis of Contextual Influences on Face Categorization.” Cerebral Cortex  25(2):415–22. 10.1093/cercor/bht238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Garbarski, Dana, Nora Cate  Schaeffer, and Jennifer  Dykema. 2016. “Interviewing Practices, Conversational Practices, and Rapport: Responsiveness and Engagement in the Standardized Survey Interview.” Sociological Methodology.  46(1):1–38. 10.1177/0081175016637890. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Harris, David R., and Jeremiah J.  Sim. 2002. “Who is Multiracial? Assessing the Complexity of Lived Race.” American Sociological Review  67:614–27. 10.2307/3088948. [DOI] [Google Scholar]
  24. Jiménez, Tomás R.. 2010. “Affiliative Ethnic Identity: A More Elastic Link Between Ethnic Ancestry and Culture.” Ethnic and Racial Studies.  33(10):1756–75. 10.1080/01419871003678551. [DOI] [Google Scholar]
  25. Kang, Sonia K., Katy  Decelles, András  Tilcsik, and Sora  Jun. 2016. “Whitened Resumes: Race and Self-Presentation in the Labor Market.” Administrative Science Quarterly.  61(3):469–502. 10.1177/0001839216639577. [DOI] [Google Scholar]
  26. Krysan, Maria, and Mick P.  Couper. 2003. “Race in the Live and the Virtual Interview: Racial Deference, Social Desirability, and Activation Effects in Attitude Surveys.” Social Psychology Quarterly  66(4):364–83. 10.2307/1519835. [DOI] [Google Scholar]
  27. Leenders, Roger. 1997. "Longitudinal Behavior of Network Structure and Actor Attributes: Modeling Interdependence of Contagion and Selection." Pp. 165–184 in Evolution of Social Networks. New York: Routledge. [Google Scholar]
  28. Leszczensky, Lars, and Sebastian  Pink. 2019. “What Drives Ethnic Homophily? A Relational Approach on How Ethnic Identification Moderates Preferences for Same-Ethnic Friends.” American Sociological Review.  84(3):394–419. 10.1177/0003122419846849. [DOI] [Google Scholar]
  29. Liebler, Carolyn A., Sonya R.  Porter, Leticia E.  Fernandez, James M.  Noon, and Sharon R.  Ennis. 2017. “America’s Churning Races: Race and Ethnic Response Changes between Census 2000 and the 2010 Census.” Demography  54(1):259–84. 10.1007/s13524-016-0544-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Loveman, Mara. 2014. National Colors: Racial Classification and the State in Latin America. New York: Oxford University Press, 10.1093/acprof:oso/9780199337354.001.0001. [DOI] [Google Scholar]
  31. Loveman, Mara, and Jeronimo O.  Muniz. 2007. “How Puerto Rico Became White: Boundary Dynamics and Intercensus Racial Reclassification.” American Sociological Review  72(6):915–39. 10.1177/000312240707200604. [DOI] [Google Scholar]
  32. Loveman, Mara, Jeronimo O.  Muniz, and Stanley R.  Bailey. 2012. “Brazil in Black and White? Race Categories, the Census, and the Study of Inequality.” Ethnic and Racial Studies  35(8):1466–83. 10.1080/01419870.2011.607503. [DOI] [Google Scholar]
  33. Maynard, Douglas W., Hanneke  Houtkoop-Steenstra, Nora Cate  Schaeffer, and Johannes  van der  Zouwen. 2002. Standardization and Tacit Knowledge: Interaction and Practice in the Survey Interview. New York: John  Wiley & Sons. [Google Scholar]
  34. Melamed, David, Brent  Simpson, Ashley  Harrell, Christopher W.  Munn, Jared Z.  Abernathy, and Matthew  Sweitzer. 2020. “Homophily and Segregation in Cooperative Networks.” American Journal of Sociology  125(4):1084–127. 10.1086/708142. [DOI] [Google Scholar]
  35. Morning, Ann. 2018. “Kaleidoscope: Contested Identities and New Forms of Race Membership.” Ethnic and Racial Studies.  41(6):1055–73. 10.1080/01419870.2018.1415456. [DOI] [Google Scholar]
  36. O’Brien, Eileen. 2011. “The Transformation of the Role of ‘Race’ in the Qualitative Interview: Not if Race Matters, but How?” In Rethinking Race and Ethnicity in Research Methods, edited by Stanfield II, John H., pp. 67–95. Walnut Creek, CA: Left Coast Press. [Google Scholar]
  37. Okamura, Jonathan. 1981. “Situational Ethnicity.” Ethnic and Racial Studies.  4(4):452–65. 10.1080/01419870.1981.9993351. [DOI] [Google Scholar]
  38. Omi, Michael, and Howard  Winant. 1994. Racial Formation in the United States: From the 1960s to the 1990s. New York: Routledge. [Google Scholar]
  39. Pardo, Jennifer S.. 2006. “On Phonetic Convergence During Conversational Interaction.” The Journal of the Acoustical Society of America.  119(4):2382–93. 10.1121/1.2178720. [DOI] [PubMed] [Google Scholar]
  40. Parker, Kim, Juliana Menasce  Horowitz, Rich  Morin, and Mark Hugo  Lopez. 2015. “Chapter 7: The Many Dimensions of Hispanic Racial Identity.” in Multiracial in America , Washington D.C.: Pew Research Center.  Accessed September 26, 2022. https://www.pewresearch.org/social-trends/2015/06/11/multiracial-in-america/ [Google Scholar]
  41. Pickering, Martin J., and Simon  Garrod. 2004. “Toward a Mechanistic Psychology of Dialogue.” Behavioral and Brain Sciences.  27(2):169–226. 10.1017/S0140525X04000056. [DOI] [PubMed] [Google Scholar]
  42. Renfrow, Daniel G.. 2004. “A Cartography of Passing in Everyday Life.” Symbolic Interaction.  27(4):485–506. 10.1525/si.2004.27.4.485. [DOI] [Google Scholar]
  43. Richeson, Jennifer A., and Samuel R.  Somers. 2016. “Toward a Social Psychology of Race and Race Relations for the Twenty-First Century.” Annual Review of Psychology.  67:439–63. 10.1146/annurev-psych-010213-115115. [DOI] [PubMed] [Google Scholar]
  44. Ridgeway, Cecilia L.. 2009. “Framed Before We Know It, How Gender Shapes Social Relations.” Gender and Society.  23(2):145–60. 10.1177/0891243208330313. [DOI] [Google Scholar]
  45. Roth, Wendy D.. 2012. Race Migrations: Latinos and the Cultural Transformation of Race. Stanford, CA: Stanford University Press, 10.1515/9780804782531. [DOI] [Google Scholar]
  46. Roth, Wendy D.. 2016. “The Multiple Dimensions of Race.” Ethnic and Racial Studies.  39(8):1310–38. 10.1080/01419870.2016.1140793. [DOI] [Google Scholar]
  47. Rucker, Julian M., and Jennifer A.  Richeson. 2021. “Toward an Understanding of Structural Racism: Implications for Criminal Justice.” Science  374(6565):286–90. 10.1126/science.abj7779. [DOI] [PubMed] [Google Scholar]
  48. Saperstein, Aliya, and Andrew M.  Penner. 2012. “Racial Fluidity and Inequality in the United States.” American Journal of Sociology  118(3):676–727. 10.1086/667722. [DOI] [Google Scholar]
  49. Saperstein, Aliya, and Andrew M.  Penner. 2016. “Still Searching for a True Race? Reply to Kramer et al. and Alba et al.” American Journal of Sociology  122(1):263–85. 10.1086/687806. [DOI] [PubMed] [Google Scholar]
  50. Saperstein, Aliya, Andrew M.  Penner, and Ryan  Light. 2013. “Racial Formation in Perspective: Connecting Individuals, Institutions, and Power Relations.” Annual Review of Sociology  39:359–78. 10.1146/annurev-soc-071312-145639. [DOI] [Google Scholar]
  51. Sewell, William H.  Jr.. 1992. “A Theory of Structure: Duality, Agency, and Transformation.” American Journal of Sociology  98(1):1–29. 10.1086/229967. [DOI] [Google Scholar]
  52. Smith, Tom W., Michael  Davern, Jeremy  Freese, and Stephen  Morgan. 2018. General Social Surveys, 1972–2018 [machine-readable data file] /Principal Investigator, Smith, Tom W.; Co-Principal Investigators, Michael Davern, Jeremy Freese, and Stephen Morgan; Sponsored by National Science Foundation. --NORC ed. Chicago:  NORC at the University of Chicago [producer and distributor]. Data accessed from the GSS Data Explorer website at gssdataexplorer.norc.org: NORC. [Google Scholar]
  53. Smith, Tom, and Jaesok  Son. 2011. “An Analysis of Panel Attrition and Panel Change on the 2006–2008 General Social Survey Panel.” GSS Methodological Report No. 118. Chicago: NORC. Accessed October 13, 2022. http://gss.norc.org/Documents/reports/methodological-reports/MR118.pdf [Google Scholar]
  54. Steglich, Christian, Tom A.B.  Snijders, and Michael  Pearson. 2010. “Dynamic Networks and Behavior: Separating Selection from Influence.” Sociological Methodology.  40(1):329–93. 10.1111/j.1467-9531.2010.01225.x. [DOI] [Google Scholar]
  55. Stets, Jan E.. 2006. “Identity Theory.” Pp. 88–110 in Contemporary Social Psychological Theories, edited by Peter J.  Burke. Stanford, CA: Stanford University Press. [Google Scholar]
  56. Sun, Hanyu, Frederick G.  Conrad, and Frauke  Kreuter. 2021. “The Relationship Between Interviewer-Respondent Rapport and Data Quality.” Journal of Survey Statistics and Methodology.  9(3):429–48. 10.1093/jssam/smz043. [DOI] [Google Scholar]
  57. Tavory, Iddo, and Gary Alan  Fine. 2020. “Disruption and the Theory of the Interaction Order.” Theory and Society.  49:365–85. 10.1007/s11186-020-09384-3. [DOI] [Google Scholar]
  58. Vaisey, Stephen, and Andrew  Miles. 2017. “What You Can – and Can’t – Do with Three-Wave Panel Data.” Sociological Methods & Research.  46(1):44–67. 10.1177/0049124114547769. [DOI] [Google Scholar]
  59. Vargas, Nicholas, and Jared  Kingsbury. 2016. “Racial Identity Contestation: Mapping and Measuring Racial Boundaries.” Sociology Compass.  10(8):718–29. 10.1111/soc4.12395. [DOI] [Google Scholar]
  60. West, Brady T., and Annelies G.  Blom. 2017. “Explaining Interviewer Effects: A Research Synthesis.” Journal of Survey Statistics and Methodology.  5(2):175–211. [Google Scholar]
  61. Whitehead, Ellen M., Allan  Farrell, and Jenifer L.  Bratter. 2021. “‘If You Don’t Know Me by Now…’: Assessing Race Mismatch through Differences in Race Reports of Fathers in the Fragile Families Survey.” Du Bois Review: Social Science Research on Race  18(2):365–92. 10.1017/S1742058X21000084. [DOI] [Google Scholar]
  62. Wilkinson, Sue. 2011. “Constructing Ethnicity Statistics in Talk-in-Interaction: Producing the 'White European'.” Discourse & Society  22(3):343–61. 10.1177/0957926510395446. [DOI] [Google Scholar]
  63. Wimmer, Andreas. 2012. Ethnic Boundary Making: Institutions, Power, Networks. New York: Oxford University Press. [Google Scholar]
  64. Wong, Jaclyn S., and Andrew M.  Penner. 2018. “Better Together? Interracial Relationships and Depressive Symptoms.” Socius  4:1–11. 10.1177/2378023118814610. [DOI] [Google Scholar]
  65. Zuberi, Tukufu. 2001. Thicker than Blood. How Racial Statistics Lie. Minneapolis: University of Minnesota Press. [Google Scholar]

Associated Data

Supplementary Materials

Appendices_soac115
replication_code_soac115


Articles from Social Forces; a Scientific Medium of Social Study and Interpretation are provided here courtesy of Oxford University Press
