The gap between research and policy has consistently impeded the use of science in public policy, and has limited the benefit that policy might have on the scientific process. One reason for this gap is that researchers and policymakers abide by different norms, which can make discussion and collaboration difficult. As technology use among policymakers has increased in recent years, email provides a simple way for scientists to engage with policymakers. However, there is limited recent evidence about how legislators engage with their email in general, and even less about their engagement with science-based information. In particular, evolving science around the coronavirus pandemic and racial disparities has made timely communication with policymakers all the more important. To best allocate scarce science communication resources, it is critical to understand which legislators are likely to engage with science information via email, and equally important to know who is not being reached.
The Policymaker-Scientist Divide
Science communication has been critical to mitigating the effects of the COVID-19 pandemic, especially information about COVID-19 transmission rates, current local infection and mortality rates, and local public health orders and guidelines. The $2 trillion CARES Act allocated $1.25 billion for federal research (Mervis, 2020), which should help inform future policy. Unfortunately, the perceived divide between scientists and policymakers has increased in recent years (Oliver et al., 2014) and was further exacerbated by the COVID-19 pandemic (Viglione, 2020). This gap renders both policy and research less impactful: policies have limited benefits to society because they are not informed by science, and research has a narrower reach because it does not enter the policy sphere and cannot inform wide-scale improvements. Indeed, policies that cite research evidence are more likely to pass committee and become enacted (Scott et al., 2019) and boost voters’ confidence in democracy (Dionne, 2004). Thus, the disconnect between researchers and policymakers has a tangible impact on policy language, policy success, and research quality.
There are several reasons that this gap exists, many of which relate to differing policymaker and researcher norms. For example, one potential reason is that policymakers see academic work as slow, disengaged from the public, and detached from tangible needs (Caplan, 1979; Choi, 2005). While academics dedicate months, years, or even decades to studying a topic, policymakers often have only a few weeks to put together a policy. Another divide concerns certainty. Academics recognize the limitations in studies, the caveats in findings, and the general uncertainty of a theory. Policymakers must translate the complex literature into usable information while simultaneously considering the caveats and the inherently evolving nature of science. These diverging norms between the science and policy communities have limited both the quantity and quality of collaboration.
Recent models that emphasize enduring relationships have been shown to effectively narrow this research-policy gap (Crowley et al., 2021), but they are labor and resource intensive, making them difficult to implement at scale at the state level (Crowley et al., 2018). Email campaigns, however, provide a cost-effective and feasible way to distribute science-based information to thousands of legislators across a nation. Accessing the science may also serve as a foot in the door for the enduring relationships shown to increase use of research evidence (URE) in public policy (Crowley et al., 2021). The increasing use of technology and online platforms should connect governments and citizens (Nash et al., 2017), but instead may be contributing to information overload. While information overload was prevalent prior to the pandemic (Bawden & Robinson, 2009), it has undoubtedly been exacerbated by recent sociopolitical conditions (Hong & Kim, 2020), making it less likely that policymakers open emails from unknown senders (Calfano, 2019).
There is little research on demographic differences in accessing research evidence, and it is difficult to hypothesize a specific direction of these differences given the complex nature of supply and demand. For example, those with higher educational attainment may be more likely to see the value in research and therefore access it more frequently. It could be, however, that they already have preferred sources or feel sufficiently equipped with their prior knowledge. A similarly unpredictable relationship may exist even for legislators who talk about a research topic (Scott et al., under review). Legislators who already produce bills or public statements with URE may access new materials more frequently in order to support future endeavors, or they may already have preferred sources and therefore choose not to access new material that is sent to them. The field would benefit from greater understanding of these relationships.
Legislator Email Behavior
The pandemic and recent racial justice movements have affected decision-makers at all levels of government but may impact individual legislators to different degrees. At the state level, legislators who have staffers are likely to have more capacity to meaningfully engage with their email inboxes. Among policymakers specifically, institutional and personal factors contribute to a divide in technology receptiveness and proficiency (McNutt, 2014; van Deursen et al., 2018). Most research on legislator email behavior comes from self-report surveys, which are biased both in who responds and how they respond (Grimm, 2010). The most recent literature is from the early 2000s, so it is unclear whether those results would hold in the present media and technology environment. The limited and perhaps outdated literature that exists suggests that, like the general public, the degree to which legislators use email is a function of their resources (Cooper, 2002a), though all use email at least a minimal amount. In 2005, email was a particularly popular medium for communication and was rated as important as telephone calls and meetings (Chen, 2005). However, policymakers differed in the perceived importance of emails generally (Cooper, 2004) and constituent emails specifically (Richardson & Cooper, 2006). Specifically, Cooper (2002a, 2002b, 2004) led a series of studies surveying legislators from California, Georgia, and Iowa about their internet use and media tactics. A follow-up survey expanded the number of states from three to eight and explored perceptions of email (Richardson & Cooper, 2006). Californian legislators, part of a professional legislature (i.e., full-time, salaried legislators with staff), reported higher internet use (Cooper, 2004) and more frequent email checking (Cooper, 2002b) than Georgian and Iowan legislators, whose legislatures are deemed less professional (i.e., part-time legislators or legislators without staff).
Cooper (2002b) further found no difference in email checking by party or leadership position.
Gender differences within these constructs are more complicated. Gender was not associated with the use of media tactics (Cooper, 2002a) or internet use broadly (Cooper, 2004) but was associated with email checking frequency, such that male legislators reported checking their email more often than female legislators did (Cooper, 2002b). At that time, the finding was said to be “consistent with trends in Internet usage” (p. 131). Just a few years later, however, female legislators were found to view email more positively than male legislators (though there was no comment on frequency; Richardson & Cooper, 2006). Attitude (positivity) and behavior (frequency) are not interchangeable, which may explain this discrepancy. Nearly two decades later, the digital divide in the United States is no longer based on gender (Blank, 2017; Tsetsi & Rains, 2017; Walker et al., 2019). However, gender gaps in email use and response within legislatures may still persist, perhaps in the opposite direction from the one originally found, such that females are more engaged than males. Female legislators may pay more attention to constituent concerns than males because female legislators are less likely to be in positions of power within legislatures (Erikson & Josefsson, 2021) and thus may have more time to dedicate to email. Further, female legislators may need to appear more responsive to constituent services than males for the sake of their re-election due to gender norms; females are expected to be more caring and to do more service (Hochschild, 2012), and they face more criticism than males when defying gender norms (Everitt et al., 2016; Rudman & Phelan, 2008; Templin, 1999).
This series of studies also explored how age and tenure in the legislature were associated with technology use. Younger legislators reported higher internet use (Cooper, 2004) and viewed email more positively (Richardson & Cooper, 2006), but did not differ in email checking frequency (Cooper, 2002b). Number of terms served, however, was not associated with perceptions of email (Richardson & Cooper, 2006). This supports a finding from Kedrowski (1996) that legislators who were most willing to use new media were younger but not necessarily junior. Additional research has shown that citizens differ in their online political participation, such that younger people participate more than older people (Hoffman & Lutz, 2019). Thus, there is reason to expect legislators to differ in their management of online constituent services. Together, these findings suggest that age, but not number of terms, may be associated with email behavior. However, this literature has not been advanced in the past decade to reflect updated norms of email interaction. It is unclear whether this finding would hold in present times, given how much more widespread email use has become. The baseline use of technology may have risen such that everyone accesses email to the same degree, with the differences found previously now existing only for more advanced technologies like social media.
One reason the email literature has stagnated is the increase in social media use and subsequent interest in understanding that use. Accordingly, while there is little recent evidence about legislator email behavior, there is significantly more research on how legislators interact with social media, especially Twitter. For example, Scherpereel et al. (2018) showed that patterns of posting on Twitter differ by policy responsibilities and leadership positions, but not by the institution the legislator serves. Further, Hong et al. (2019) found that legislators who receive less traditional media attention (e.g., junior legislators and those without leadership positions), and particularly those with extreme views, see social media as more beneficial. With less media attention, they may have more time to dedicate to social media and perhaps also email. It is unclear if these behaviors generalize to email behavior.
Although there is limited recent literature on how legislators interact with email, there is more research on how the general public interacts with email. When deciding what emails to attend to, individuals rank importance based on a variety of factors, such as perceived time and effort, number of email recipients, current workload, perceived urgency, and the message sender (Sarrafzadeh et al., 2019). The authority of the sender also matters, such that senders with higher status prompt higher email engagement (Lim et al., 2016). After choosing to open an email once or twice, the most common action is to then delete the email. For emails opened more than twice, other actions (archiving, replying, sorting) become more likely (Alrashed et al., 2018). These actions can differ by demographic groups. For example, when using the search bar to find a specific email, females type longer queries and have longer searching sessions than males, and younger people write shorter queries and search for a shorter time than older people (Carmel et al., 2017). However, it is unclear whether these behaviors will generalize to a policymaker sample, given the unique role email plays in legislatures and how this role has expanded since the last major investigation into email behavior nearly two decades ago (e.g., Cooper, 2002a, 2002b, 2004).
The Present Study
Because past research has largely not focused on legislator interactions with either science information or digital communication, researchers would benefit from greater understanding of those interactions, and practitioners would benefit from being able to better utilize scarce science communication resources (e.g., time, analytic capacity, funding). For instance, policymakers’ urgent need for accurate and up-to-date scientific information during the COVID-19 pandemic provides an opportunity to study access to research. The present work sought to fill two gaps in the literature. First, it has yet to be shown whether recent findings on layperson email behavior hold in a policymaker sample, given the unique role that email plays in policymakers’ professional duties. Second, there remains limited research on mass communication of relevant science messages. It is critical to understand, too, who is being reached with any of these strategies. Specifically, we aim to identify patterns of how policymakers access relevant science emails in order to improve our reach with unresponsive policymaker groups. This was done using a novel observational approach to understanding science communication reach with policymakers through email campaigns. Such audience segmentation analyses have been used successfully on legislators using surveys (Purtle et al., 2016; 2018), but we believe this is the first time such methods have been applied to observed legislator behavior. We hypothesize that there will be reliable groups of recipients based on how long it takes them to open emails, and that the demographics of the people in those groups will differ. Based on previous research, we expect that females will be more engaged than males (Cooper, 2002a, 2002b, 2004; Hochschild, 2012), and that there will not be differences by political party (Cooper, 2002b).
Methods
Study design
Five scientific communication campaigns were carried out between May 2020 and August 2020. These trials were stratified by day of the week, and one trial was randomly chosen from each of the five weekdays. Specifics of the message distribution dates, times, and days of the week are presented in Table 1. Each message was sent under the name of the resource’s author, rather than their affiliated organization. Thus, emails likely appeared to be from a constituent. Subject lines communicated that the content was about research. The body of the email was a brief introduction to the author, and the content was followed by a link to the associated resource (i.e., policy brief).
Table 1.
Email Campaign Distribution Date and Time
| Trial | Day of week | Date | Time (EDT) |
|---|---|---|---|
| 1 | Thursday | 05/28/20 | 12:00pm |
| 2 | Tuesday | 06/09/20 | 4:00pm |
| 3 | Friday | 06/26/20 | 3:56pm |
| 4 | Wednesday | 07/01/20 | 1:00pm |
| 5 | Monday | 08/03/20 | 11:30am |
Participants
Participants were 3,057 state legislators from all 50 states and four territories (see Figure 1) who were recipients of each of the five email trials included in the study. These recipients were selected for being on a committee related to child/family, education, and health topics, as identified by Quorum. These topics were selected to align with the content of the distributions. Demographics of these participants are available in Table 3.
Figure 1.
Heat Map of Recipients in U.S. States.
Table 3.
Recipient Characteristics
| Characteristic (n valid) | Percent or M(SD) |
|---|---|
| Gender (2618) | |
| Male | 66.35 |
| Female | 33.65 |
| Race/ethnicity (2606) | |
| Asian/Pacific Islander | 1.69 |
| Black/African American | 11.20 |
| Latinx | 4.60 |
| Native American | 0.69 |
| Other | 0.54 |
| Two or more | 0.50 |
| White | 80.78 |
| Higher educational attainment (2618) | |
| Undergraduate or none | 66.42 |
| Professional degree or doctorate | 33.58 |
| Party affiliation (2619) | |
| Republican | 53.30 |
| Democrat | 46.09 |
| Other | 0.61 |
| Number of terms (2583) | 3.40 (2.99) |
| Age (1515) | 58.24 (12.81) |
| URE Bills Sponsored or Cosponsored (2618) | 10.32 (14.08) |
| URE Documents (427) | 3.85 (6.31) |
Measures
Demographic Variables
Demographic variables for legislators were obtained through the Quorum database (Quorum, 2021), which collects and aggregates data about federal and state policymakers and was designed specifically for legislative outreach. These metrics included age, gender, terminal degree, race/ethnicity, political affiliation, and number of terms served. Terminal degree was dichotomized as undergraduate degree or none versus professional degree or doctorate. Race/ethnicity was coded as Black, Indigenous, or otherwise Person of Color (BIPOC) versus White, due to the lack of comparable sample sizes within BIPOC groups. Demographics are summarized in the results section.
Demonstrated Interest in Research
There were two indices of demonstrated interest in research. One was the number of bills sponsored or cosponsored from January 2020 until September 2020 (including carryover from previous years) that used keywords related to the use of research evidence (URE). The second was the number of public statements published in the same timeframe that used the same URE keywords. Both were obtained using Quorum. The Boolean phrase used has been employed in other work (e.g., Green et al., in prep.) and is available in the Appendix.
Time-to-Open
The primary outcome measure was the time it took the recipient to open the email for the first time: the number of hours from when the recipient’s server was recorded as having received the message to the time of the first open. Opens were tracked with an industry-standard image-download tracker, a one-pixel invisible image that is downloaded when an email is opened. Some email servers and firewall software prevent this one-pixel image from being downloaded. Therefore, if a participant was recorded as not having opened, it may be because (a) they indeed did not open the email, or (b) the pixel was not downloaded properly.
This Time-to-Open metric was transformed into additional metrics for maximum interpretability. The Time-to-Open variable ranged from zero minutes to two weeks and thus had a large range that introduced noise. To reduce this noise, and to correspond to the norm of timeliness in policy (Oliver et al., 2014), we categorized it based on frequency distributions and our own understanding of email responsiveness. Specifically, we split the Time-to-Open variable into a ranked indicator ranging from one to five, detailed in Table 2. A value of one was given for times to open of six minutes (0.10 hours) or less, which theoretically reflects those who check email based on notifications. A value of two was assigned to people who opened between seven minutes and one hour, such as those who finished a task before checking email. A value of three reflects people who opened between 1.01 and three hours, who may not check email during meetings or other tasks but otherwise keep their inboxes read. A value of four was given to people who first opened the email between 3.01 and nine hours after receipt, such that they clear their inbox by the end of the day. A value of five was given to those who opened more than nine hours after receiving the email but before data collection closed two weeks later, corresponding to the industry standard of open-rate tracking (Manola, 2019). Those who were not tracked as having opened were marked as missing.
Table 2.
Ranked Hours to Open Variable
| Value | Hour range included | Approximate frequency (%) | Who |
|---|---|---|---|
| 1 | 0.00–0.10 | 10–15 | Checked based on notifications |
| 2 | 0.11–1.00 | 10–15 | Responsive, but perhaps had a task or had stepped away from computer |
| 3 | 1.01–3.00 | 10 | Had a long meeting or a big task to complete first, but keeps inbox empty |
| 4 | 3.01–9.00 | 10 | Gets around to an empty inbox by the end of the workday |
| 5 | 9.01+ | 10 | Eventually checked |
| Missing | No tracked open | 40–50 | Too chaotic of an inbox, or uses firewalls |
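The cut points in Table 2 can be expressed as a small binning function. This is a minimal sketch under the assumptions described above; the function and variable names are illustrative, not taken from the study’s materials.

```python
import math

def rank_time_to_open(hours):
    """Map hours until first open to the 1-5 ranked indicator from Table 2.

    Returns None for recipients with no tracked open (treated as missing).
    """
    if hours is None or math.isnan(hours):
        return None          # no tracked open: missing
    if hours <= 0.10:
        return 1             # within ~6 minutes: checks based on notifications
    if hours <= 1.00:
        return 2             # within the hour
    if hours <= 3.00:
        return 3             # within three hours
    if hours <= 9.00:
        return 4             # by the end of the workday
    return 5                 # eventually, within the two-week tracking window
```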
This new ranked indicator of Time-to-Open was used to create more meaningful metrics reflecting timeliness over multiple emails. We computed the mean of an individual’s ranked indicator across however many trials they had opened. Means were missing for people who opened zero times. This metric can be interpreted as average timeliness and could range from one to five. We also computed the standard deviation of the ranked indicator for each recipient as an index of the variability of their email behavior. Variability metrics were missing for people who opened zero times or one time, as a standard deviation cannot be computed from fewer than two observations. The final metric was stability over time: a count of how many Time-to-Open entries were available for the participant, and thus the number of emails the recipient opened. It ranged from zero to five. The purpose of this final index was twofold. First, it serves as an indicator of overall responsiveness, treating each trial as a dichotomous opened/not-opened outcome without considering the time until opening. Second, it accounts for the missing data of the other indices. None of these stability data were missing, so even people who never opened (n = 1,616) were included in the present analyses.
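The three per-recipient metrics (timeliness, variability, stability) can be sketched as follows. Note that `statistics.stdev` requires at least two observations, mirroring the rule that variability is missing for recipients who opened zero or one time; the function name is illustrative.

```python
import statistics

def recipient_metrics(ranks):
    """ranks: the 1-5 ranked indicator for each of the five trials,
    with None for trials the recipient did not open.

    Returns (timeliness, variability, stability); a metric is None
    when it cannot be computed for that recipient.
    """
    opened = [r for r in ranks if r is not None]
    stability = len(opened)                                   # 0-5 trials opened
    timeliness = statistics.mean(opened) if opened else None  # mean rank
    variability = statistics.stdev(opened) if len(opened) >= 2 else None
    return timeliness, variability, stability
```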
Analysis
Because our dependent variables were continuous, we ran a series of latent profile analyses (LPA; Marsh et al., 2009). This analysis identifies the number of latent categories and estimates the characteristics of those categories. We performed a series of LPAs on three dependent variables: timeliness (mean ranked hours to open), variability (standard deviation), and stability over time (total number of emails opened). The LPAs used bootstrapping and random starts to fit models with two to six specified profiles. Model fit was assessed with Akaike’s Information Criterion (AIC; Akaike, 1998), the sample-size adjusted Bayesian Information Criterion (sBIC; Sclove, 1987), and the Lo-Mendell-Rubin adjusted likelihood ratio test (LMRT; Lo et al., 2001) for nested model comparison. The fit statistics and interpretability guided the decision of which profile count was most appropriate (Maibach et al., 2011; Purtle et al., 2018). Individual profiles were interpreted via the estimated means of the dependent variables.
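The study fit its LPAs in Mplus; as an illustrative analog, scikit-learn’s `GaussianMixture` with diagonal covariances approximates a latent profile model and exposes AIC/BIC for the same kind of profile-count comparison. The data below are synthetic stand-ins for the three metrics, and the Mplus-specific sBIC and LMRT statistics are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Columns: timeliness, variability, stability (synthetic, for illustration only)
X = np.vstack([
    rng.normal([1.5, 0.2, 0.0], 0.3, (400, 3)),   # "Never Opener"-like cluster
    rng.normal([2.7, 1.0, 4.6], 0.3, (200, 3)),   # "Always Opener"-like cluster
    rng.normal([3.2, 2.6, 1.0], 0.3, (150, 3)),   # "Rare Opener"-like cluster
])

# Fit 2-6 profiles with random restarts and compare information criteria
for k in range(2, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    print(f"{k} profiles: AIC={gm.aic(X):.0f}  BIC={gm.bic(X):.0f}")
```

As in the paper, the lowest information criteria would be weighed against the conceptual interpretability of the resulting profiles before a final model is retained.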
We then investigated predictors of profile group membership to better understand who is being reached by these email campaigns. To do this, we examined associations between group membership and legislator demographics. Among legislators, demographics are correlated. For example, female legislators may be older, on average, than male legislators because of the gender stereotypes young females are subjected to, which affect their electability (Erikson & Josefsson, 2021; Fountaine, 2017); gender is then confounded with age. Thus, to avoid such confounding and maximize interpretability, we employed a series of simple multinomial logistic regressions in Stata 13.0, entering predictors one at a time, rather than a single multiple multinomial logistic regression. We chose this approach, rather than the 3-Step Process within Mplus (Asparouhov & Muthén, 2014) that accounts for uncertainty in an individual’s profile membership classification, because of the near-perfect classification probabilities of the four-profile model and the corresponding profile certainty. This is not expected to have significantly affected results.
Results
Patterns of Opening
We performed a series of LPAs to determine the number of profiles that best represent these data. The AIC, sBIC, and LMRT tests all indicated that a five-profile model fit best (see Table 4). However, the model with four profiles was more conceptually meaningful and was associated with only a slight reduction in entropy (1.00 to 0.977); both had strong model fit. The three-profile model was similarly interpretable but only confirmed preexisting beliefs about who the groups were (some fast, some slow, and some in between). The four-profile model indicated a novel group and was thus retained as the preferred model. These profiles are classified as follows and are summarized in Table 5. One profile, Never Openers (46.2%), comprises those who opened quickly (M = 1.51) and consistently (M = 0.17), but rarely or never (M = 0.01). The next profile comprises Rare Openers (16.1%), who opened slowly (M = 3.24), inconsistently (M = 2.63), and rarely (M = 1.03). A third profile is labeled Intermittent Openers (18.8%), as they opened relatively quickly (M = 2.97) on a consistent basis (M = 0.94) but only opened a few trials overall (M = 2.45). The final profile is Always Openers (18.9%), who opened more quickly than Intermittent Openers (M = 2.71) and did so on a consistent basis (M = 1.04). Unlike Intermittent Openers, Always Openers opened nearly every trial (M = 4.63).
Table 4.
Model Fit Indices for Latent Profile Analysis Series
| Profiles specified | AIC | sBIC | LMRT (p) | Bootstrapped LMRT (p) | Entropy |
|---|---|---|---|---|---|
| 2 | 15584 | 15611 | < .0001 | < .0001 | .937 |
| 3 | 14646 | 14684 | < .0001 | < .0001 | .983 |
| 4 | 14305 | 14353 | < .0001 | < .0001 | .977 |
| 5 | 12846 | 12905 | < .0001 | < .0001 | 1.00 |
| 6 | 13015 | 13085 | .99 | 1.00 | .977 |
Table 5.
Four-Profile Model Interpretation
| Profile (Label) | Count (%) | Classification probabilities | Timeliness | Variability | Stability |
|---|---|---|---|---|---|
| 1 (Always Openers) | 497 (18.9) | 1.00 | 2.71 | 1.04 | 4.63 |
| 2 (Intermittent Openers) | 493 (18.8) | .990 | 2.97 | 0.94 | 2.45 |
| 3 (Never Openers) | 1212 (46.2) | .990 | 1.51 | 0.17 | 0.01 |
| 4 (Rare Openers) | 423 (16.1) | .986 | 3.24 | 2.63 | 1.03 |
Group Membership
Next, to understand demographic predictors of each profile, we conducted multinomial logistic regressions. Results are summarized in Table 6. Female legislators were 28% more likely than males to be Intermittent Openers, relative to Never Openers (RRR = 1.28, SE = 0.14, z = 2.18, p = .03). Although not statistically significant, females also tended to be 21% more likely than males (RRR = 1.21, SE = 0.14, z = 1.72, p = .085) to be Always Openers. There was no significant difference by gender between Rare Openers and Never Openers, p = .72. Though there was no independent effect of age, there was an effect of number of terms on profile membership. Specifically, for every one-term increase, there was an associated 6% drop in the likelihood of being in Always Openers rather than Never Openers, RRR = 0.94, SE = 0.02, z = −3.25, p = .001, and a 4% drop in the likelihood of being in Rare Openers rather than Never Openers, RRR = 0.96, SE = 0.02, z = −2.11, p = .035. The effect of terms on the comparison between Intermittent Openers and Never Openers was nonsignificant but indicated that those with more terms were more likely to be in the Never Openers profile. Race/ethnicity was associated with profile membership such that BIPOC legislators, relative to White legislators, were 33% less likely to be in the Always Openers group than the Never Openers group (RRR = 0.66, SE = 0.09, z = −2.82, p = .005). Educational attainment, party affiliation (restricted to Democrats and Republicans), number of URE bills sponsored or cosponsored, and number of URE documents were not related to group membership.
Table 6.
Multinomial Logistic Regression Results
Variable | χ2(3) | p |
---|---|---|
| ||
Gender | 7.57 | .06 |
Number of Terms | 13.05 | < .01 |
Race/Ethnicity | 9.07 | .03 |
Age | 0.77 | .86 |
Higher educational attainment (professional or doctorate vs undergraduate or none) | 0.16 | .98 |
Party affiliation (Republican vs Democrat) | 0.22 | .98 |
URE Bills Sponsored or Cosponsored | 5.92 | .12 |
URE Documents | 2.37 | .50 |
Discussion
The present study sought to understand potential profiles of email engagement among legislators, as well as the demographic characteristics that might predict different profiles of engagement. Results suggested four interpretable and meaningful profiles: Never Openers, who opened quickly and consistently when they did open, but practically never opened; Rare Openers, who opened slowly, inconsistently, and rarely; Intermittent Openers, who opened consistently quickly, but only a couple of times; and Always Openers, who opened consistently quickly every time. These profiles shed light on how legislators interact with their email and go beyond a simple dichotomy of people who open and those who do not. Variability in email engagement might relate to legislators’ time and capacity to check email, as well as to their priorities and values. Never Openers, for example, may not have enough support staff to be able to dedicate time to email, may receive a higher volume of emails, or may prioritize other roles over constituent services. Always Openers, in contrast, may be legislators with enough staff, with fewer responsibilities, and/or who specifically prioritize constituent services. It is important to acknowledge that these findings may speak more to the legislative office than to the legislator, as it is common for staff to manage the legislator’s email. Similarly, findings may speak to the capacity of the legislature, rather than the legislator. Though not explored in the present work, characteristics of the state, such as legislative professionalism, might explain some differences in email behavior in a multilevel model. Findings should be interpreted with the understanding that these characteristics may relate to the office or state as much as, or more than, the individual legislator.
The present work found that females, compared to males, were more likely to be in Intermittent Openers than Never Openers, and trended to be more likely in Always Openers than Never Openers. Perhaps this is because gender norms expect females to be more caring than males (Hochschild, 2012). Females, and female legislators specifically, in turn, face greater backlash (Rudman & Phelan, 2008) if they disregard constituent services or other care-related tasks (Everitt et al., 2016; Templin, 1999).
Number of terms served was also associated with profile membership. Recipients who had served more terms were more likely to be in the Never Openers profile. Perhaps legislators who have served longer feel that they already understand their constituents’ needs and therefore can be less responsive to email. It could also be that these legislators sit on more committees or hold leadership roles that demand more time. In contrast, low-term legislators may have fewer demands on their time, which, in turn, may allow them to pay more attention to email. It is unclear, however, what the flexible schedules of low-term legislators might mean for taking political action with the science information received. While they may prioritize or have more time to check email, it follows that they may also have more time to draft legislation, though they may be less experienced and efficient in doing so. Alternatively, they may have more time because they do not yet have the political influence to push forward their own bills (Volden & Wiseman, 2018). Notably, this divide is based not on age but on the number of terms served. This finding suggests that the age-based digital divide that existed in the early 2000s (Cooper, 2004; Kedrowski, 1996; Richardson & Cooper, 2006) has diminished, at least for email. It could be that the age-based divide has simply moved from email to newer, less ubiquitous technologies outside the scope of the present study, like social media.
The digital divide seems to have persisted by race/ethnicity. Legislators who identified as BIPOC were more likely to be Never Openers rather than Always Openers, though there was no difference relative to Intermittent Openers or Rare Openers. One explanation for this difference could be because of the “minority tax” (Osseo-Asare et al., 2018) wherein people who are not White are expected to give their time to efforts to achieve diversity, such as a minority caucus. These extra responsibilities may translate into less flexible schedules to, for instance, check emails. In addition to this tax is the mental effort and corresponding time needed to work against the negative pressures present in the minority at predominantly White institutions (termed “intercultural effort”; Dowd et al., 2011). BIPOC legislators may have less time and mental bandwidth to check emails given the dynamics inherent in the present system.
Several demographic and related predictors were not associated with profile membership. Party affiliation (Democrat, Republican) was not associated with profile membership. This finding underscores that research evidence is sought out and used by both major political parties (Haskins & Margolis, 2014; Kosar, 2018). Use of research evidence (URE) in bills and documents was also not predictive of profile membership. There are a few potential explanations for this null result. First, perhaps recipients did not expect to receive an email with scientific content based on the email subject line. Second, it could be that those with low URE are motivated to seek science-based information to improve their own knowledge base, while those with high URE already have preferred sources (Purtle et al., 2016). Third, these null results may be a product of the keyword phrase used to identify URE: perhaps the phrase was not specific enough to be meaningful, and the keywords chosen may have introduced false positives (e.g., “evidence” can refer to research evidence or police evidence) that diluted the fidelity of the metric. Finally, URE may not have been explanatory if legislators do not prioritize one type of email over another, though this would run contrary to extant research in this area (Scott et al., under review).
Last, educational attainment was not predictive of profile membership. It is unclear why no differences emerged; possible reasons include legislators having preferred information sources, feeling confident in their own science translation skills, or otherwise not prioritizing research materials based on their own experiences. More work is needed to understand how legislators’ educational background relates to their legislative actions.
Together, these findings may help us better allocate scarce science communication resources. First, for issues requiring a timely response, science communicators should likely target junior legislators and female legislators for optimal engagement. Junior legislators may have less legislative or agenda-setting power than their senior counterparts, but they remain a critical part of any legislative process. Working with junior legislators may be particularly advantageous: they are at the genesis of developing their policy agenda and therefore may be the most receptive to building long-term relationships. More work is needed to understand how to improve research access specifically among male legislators and those who have served for more terms. Additional research should examine specific strategies (e.g., words, formats, frames) that enhance access and engagement both within these defined audience segments and overall (Noar et al., 2007; Purtle et al., 2018). There appears to be a malleable group of legislators who engage inconsistently and for whom different tactics may be necessary. For those who never engage, a different tack altogether could be pursued; for example, those legislators may prefer more in-depth person-to-person contact to build science-policy relationships (e.g., the Research-to-Policy Collaboration model; Crowley et al., 2018; 2021). To generate such research, it is critical to build capacity for science communication strategy testing and science resource distribution (Jensen, 2014). The present work is based on one such testing method and may serve as a framework from which to build (see Long et al., 2021; Scott et al., under review). Improved targeting and messaging of science materials can increase the intended social impact of academic research.
Limitations and Strengths
The present study has several limitations that should be considered. First is the fidelity of the indicator of having opened an email. A large portion of the sample was recorded as never having opened the emails. It could indeed be that they never opened them; however, some opens go untracked because some email servers and firewall software block the tracker. Thus, the open metrics are likely underestimates and carry measurement error. Compounding this, we did not account for recipients who were out of office or on leave; autoreplies may have indicated such absences, but they were not incorporated into the data. There may also be a clustering effect within states, given that legislators within a state are likely to use the same email server, which may or may not employ a firewall. Future work in this area should consider a multilevel approach to account for this and other state-level influences. A second limitation is that the topics in the subject lines concerned COVID-19, children and families, and education issues. These findings may not generalize to emails about other topics or to a sample not chosen for its work on health and education issues. Third, this work focused on accessing email messages, not pursuing the research content itself (i.e., clicks). Access may nonetheless contribute to future engagement: accessing research is the first step in a feedback loop that can turn into a meaningful collaboration, which has already been shown to have an impact on URE in legislation (Crowley et al., 2021). More work is needed to understand which state legislators actively pursue research material or initiate relationships with researchers. Last, we operate under the assumption that accessing these research emails differs from accessing a more typical email (e.g., constituent concerns).
While we expect legislators to prioritize emails based on interest area as communicated by the subject line (e.g., Scott et al., under review), it is possible that they do not differentiate by content; they may not distinguish emails that appear to have been sent by scientists from those that appear to have been sent by constituents. Thus, claims about what the findings mean for accessing URE may generalize to accessing any constituent email.
Despite these limitations, the present study also has a number of strengths. First, while studies of legislative behavior often employ surveys that are susceptible to social desirability bias, the present study uses observational data on email open rates, which are not subject to such biases. Second, this study employed a large sample of state legislators—the largest to the authors’ knowledge—with legislators from the majority of U.S. states. The observational nature of the study contributed to this especially large sample, since it was not limited by response rates. Third, metrics were computed from five separate trials across a range of health issues, sent months apart, for greater generalizability of findings. This study serves to reinvigorate investigation into legislator email behavior: the original studies are now dated, email use has increased over the past 20 years, and new norms of email behavior have been established.
Conclusion
The present study sheds light on the profiles of email engagement among a large sample of state legislators. It adds to the literature on legislator email behavior by conceptualizing and contextualizing email open times in rank-ordered profiles, and by analyzing demographic predictors of each profile. It appears that email campaigns of evidence-based policy recommendations are effectively reaching a subgroup of legislators. Female legislators and those who have served fewer terms appear to be the most responsive; different tactics may be needed to better engage male legislators and those who have served longer. The emergent differences in state legislators’ engagement with science messages can be leveraged to improve science policy and may serve as a foundation for meaningful collaborations.
Appendix
Boolean phrase for use of research evidence in bills and public statements:
(“evidence” OR “research” OR “study” OR “studies” OR “scientific” OR “scientifically” OR “data” OR “empirical” OR “empirically” OR “evidence” OR “evaluation” OR “RCT” OR “randomized controlled trial” OR “QED” OR “quasi-experimental research design” OR “control” OR “comparison” OR “impacts” OR “expert” OR “researcher”) AND (“informed” OR “based” OR “based on” OR “driven” OR “experimental” OR “peer-reviewed” OR “rigorous” OR “randomized” OR “effective” OR “ineffective” OR “promising” OR “statistically significant”) AND (“demonstrate” OR “suggest” OR “found” OR “show” OR “illustrate” OR “replicate”)
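The Boolean phrase above requires a document to contain at least one term from each of the three parenthesized groups. As an illustrative sketch only (the authors’ actual search tooling is not described here, and the keyword lists below are abbreviated for brevity), such matching could be implemented as:

```python
import re

# Abbreviated versions of the three keyword groups from the Boolean phrase.
# The full lists appear in the Appendix text above.
GROUPS = [
    ["evidence", "research", "study", "studies", "data", "empirical"],
    ["informed", "based", "driven", "peer-reviewed", "rigorous", "effective"],
    ["demonstrate", "suggest", "found", "show", "illustrate", "replicate"],
]

def matches_ure_phrase(text: str) -> bool:
    """Return True if the text contains at least one whole-word match
    (case-insensitive) from each of the three keyword groups."""
    lowered = text.lower()
    for group in GROUPS:
        # Every group must contribute at least one hit (the AND between groups);
        # within a group, any term suffices (the OR within a group).
        if not any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
                   for term in group):
            return False
    return True
```

A sketch like this also makes the false-positive concern raised in the discussion concrete: a sentence such as “Police collected evidence at the scene” matches the first group but not the other two, so the conjunctive structure filters some, though not all, non-research uses of the keywords.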
Contributor Information
Jessica Pugel, Pennsylvania State University, United States of America.
Elizabeth C. Long, Pennsylvania State University, United States of America.
Mary Fernandes, Georgia State University, United States of America.
Katherine Cruz, Johns Hopkins University, United States of America.
Cagla Giray, Pennsylvania State University, United States of America.
D. Max Crowley, Pennsylvania State University, United States of America.
Taylor Scott, Pennsylvania State University, United States of America.
References
- Akaike H (1998). Factor analysis and AIC. In Parzen E, Tanabe K, & Kitagawa G (Eds.), Selected papers of Hirotugu Akaike (pp. 371–386). Springer. [Google Scholar]
- Alrashed T, Awadallah AH, & Dumais S (2018). The lifetime of email messages: A large-scale analysis of email revisitation. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval—CHIIR ’18, presented at the 2018 Conference, ACM Press, New Brunswick, NJ, USA, pp. 120–129. [Google Scholar]
- Asparouhov T, & Muthén B (2014). Auxiliary variables in mixture modeling: Three-step approaches using M plus. Structural Equation Modeling: A Multidisciplinary Journal, 21(3), 329–341. [Google Scholar]
- Bawden D, & Robinson L (2009). The dark side of information: Overload, anxiety and other paradoxes and pathologies. Journal of Information Science, 35(2), 180–191. [Google Scholar]
- Blank G (2017). The digital divide among Twitter users and its implications for social research. Social Science Computer Review, 35(6), 679–697. [Google Scholar]
- Calfano B (2019). Power lines: Unobtrusive assessment of E-mail subject line impact on organization website use. Journal of Political Marketing, Routledge, 18(3), 179–195. [Google Scholar]
- Caplan N (1979). The two-communities theory and knowledge utilization. American Behavioral Scientist, 22(3), 459–470. [Google Scholar]
- Carmel D, Lewin-Eytan L, Libov A, Maarek Y, & Raviv A (2017). The demographics of mail search and their application to query suggestion. In Proceedings of the 26th international conference on world wide web, presented at the WWW ’17: 26th International World Wide Web Conference, International World Wide Web Conferences Steering Committee, Perth Australia, pp. 1541–1549. [Google Scholar]
- Chen EY (2005). Virtual representatives. Journal of E-Government, 2(1), 55–78. [Google Scholar]
- Choi BCK (2005). Can scientists and policy makers work together? Journal of Epidemiology & Community Health, 59(8), 632–637. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cooper CA (2002a). Media tactics in the state legislature. State Politics &amp; Policy Quarterly, 2(4), 353–371. [Google Scholar]
- Cooper CA (2002b). E-Mail in the state legislature: Evidence from three states. State and Local Government Review, 34(2), 127–132. [Google Scholar]
- Cooper CA (2004). Internet use in the state legislature: A research note. Social Science Computer Review, 22(3), 347–354. [Google Scholar]
- Crowley M, Scott JTB, & Fishbein D (2018). Translating prevention research for evidence-based policymaking: Results from the research-to-policy collaboration pilot. Prevention Science, 19(2), 260–270. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Crowley DM, Scott JT, Long EC, Green L, Israel A, Supplee L, Jordan E, Oliver K, Guillot-Wright S, Gay B, Storace R, Torres-Mackie N, Murphy Y, Donnay S, Reardanz J, Smith R, McGuire K, Baker E, ..., & Giray C (2021). Lawmakers' use of scientific evidence can be improved. Proceedings of the National Academy of Sciences, 118(9), e2012955118. 10.1073/pnas.2012955118 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dionne EJ (2004). Why Americans hate politics. Simon and Schuster. [Google Scholar]
- Dowd AC, Sawatzky M, & Korn R (2011). Theoretical foundations and a research agenda to validate measures of intercultural effort. The Review of Higher Education, 35(1), 17–44. [Google Scholar]
- Erikson J, & Josefsson C (2021). Equal playing field? On the intersection between gender and being young in the Swedish Parliament. Politics, Groups, and Identities, 9(1), 81–100. [Google Scholar]
- Everitt J, Best LA, & Gaudet D (2016). Candidate gender, behavioral style, and willingness to vote: Support for female candidates depends on conformity to gender norms. American Behavioral Scientist, 60(14), 1737–1755. [Google Scholar]
- Fountaine S (2017). Whaťs not to like?: A qualitative study of young women politicians’ self-framing on Twitter. Journal of Public Relations Research, 29(5), 219–237. [Google Scholar]
- Grimm P (2010). Social desirability bias. Wiley International Encyclopedia of Marketing. 10.1002/9781444316568.wiem02057 [DOI] [Google Scholar]
- Haskins R, & Margolis G (2014). Show me the evidence: Obama's fight for rigor and results in social policy. Brookings Institution Press. [Google Scholar]
- Hochschild AR (2012). The managed heart: Commercialization of human feeling. University of California Press. [Google Scholar]
- Hoffmann CP & Lutz C (2019). Digital divides in political participation: The mediating role of social media self-efficacy and privacy concerns. Policy & Internet, 13(1), 1–24. [Google Scholar]
- Hong H, & Kim HJ (2020). Antecedents and consequences of information overload in the COVID-19 pandemic. International Journal of Environmental Research and Public Health, 17(24), 9305. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hong S, Choi H, & Kim TK (2019). Why do politicians tweet? Extremists, underdogs, and opposing parties as political tweeters. Policy & Internet, 11(3), 305–323. [Google Scholar]
- Jensen E (2014). The problems with science communication evaluation. Journal of Science Communication, 13(01), C04. [Google Scholar]
- Kedrowski KM (1996). Media entrepreneurs and the media enterprise in the U.S. Congress. Hampton Press Cresskill. [Google Scholar]
- Kosar KR (2018). The atrophying of the congressional research service's role in supporting committee oversight. Wayne Law Review, 64, 14. [Google Scholar]
- Lim KH, Lim E-P, Jiang B, & Achananuparp P (2016). Using online controlled experiments to examine authority effects on user behavior in email campaigns. In Proceedings of the 27th ACM conference on hypertext and social media—HT ’16, presented at the 27th ACM Conference, ACM Press, Halifax, Nova Scotia, Canada, pp. 255–260. [Google Scholar]
- Lo Y, Mendell NR, & Rubin DB (2001). Testing the number of components in a normal mixture. Biometrika, 88(3), 767–778. [Google Scholar]
- Long EC, Pugel J, Scott JT, Charlot N, Giray C, Fernandes MA, & Crowley DM (2021). Rapid-cycle experimentation with state and federal policymakers for optimizing the reach of racial equity research. American Journal of Public Health, 111(10), 1768–1771. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Maibach EW, Leiserowitz A, Roser-Renouf C, & Mertz CK (2011). Identifying like-minded audiences for global warming public engagement campaigns: An audience segmentation analysis and tool development. PLOS ONE, 6(3), e17571. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Manola S (2019, July 17). The best times to send marketing emails. AB Tasty. https://www.abtasty.com/blog/times-send-marketing-emails/ [Google Scholar]
- Marsh HW, Lüdtke O, Trautwein U, & Morin AJS (2009). Classical latent profile analysis of academic self-concept dimensions: Synergy of person- and variable-centered approaches to theoretical models of self-concept. Structural Equation Modeling: A Multidisciplinary Journal, 16(2), 191–225. [Google Scholar]
- McNutt J (2014). Social networking and constituent relationships at the state level: Connecting government to citizens in a time of crisis. Working Paper. School of Public Policy and Administration, University of Delaware, Newark, DE. http://udspace.udel.edu/handle/19716/13065 [Google Scholar]
- Mervis J (2020). Massive U.S. coronavirus stimulus includes research dollars and some aid to universities. Science | AAAS. https://www.sciencemag.org/news/2020/03/massive-us-coronavirus-stimulus-includes-research-dollars-and-some-aid-universities [Google Scholar]
- Nash V, Bright J, Margetts H, & Lehdonvirta V (2017). Public policy in the platform, society. Policy & Internet, 9(4), 368–373. [Google Scholar]
- Noar SM, Benac CN, & Harris MS (2007). Does tailoring matter? Meta-analytic review of tailored print health behavior change interventions. Psychological Bulletin, 133(4), 673–693. [DOI] [PubMed] [Google Scholar]
- Oliver K, Innvar S, Lorenc T, Woodman J, & Thomas J (2014). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Services Research, 14(1), 2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Osseo-Asare A, Balasuriya L, Huot SJ, Keene D, Berg D, Nunez-Smith M, Genao I, Latimore D, & Boatright D (2018). Minority resident physicians’ views on the role of race/ethnicity in their training experiences in the workplace. JAMA Network Open, 1(5), e182723. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Purtle J, Dodson EA, & Brownson RC (2016). Uses of research evidence by state legislators who prioritize behavioral health issues. Psychiatric Services, 67(12), 1355–1361. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Purtle J, Lê-Scherban F, Wang X, Shattuck PT, Proctor EK, & Brownson RC (2018). Audience segmentation to disseminate behavioral health evidence to legislators: An empirical clustering analysis. Implementation Science, 13(1), 121. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Quorum. (2021). Best-in-class public affairs software. https://quorum.us/
- Richardson LE, & Cooper CA (2006). E-mail communication and the policy process in the state legislature. Policy Studies Journal, 34(1), 113–129. [Google Scholar]
- Rudman LA, & Phelan JE (2008). Backlash effects for disconfirming gender stereotypes in organizations. Research in Organizational Behavior, 28, 61–79. [Google Scholar]
- Sarrafzadeh B, Hassan Awadallah A, Lin CH, Lee C-J, Shokouhi M, & Dumais ST (2019). Characterizing and predicting email deferral behavior. In Proceedings of the twelfth ACM international conference on web search and data mining. Association for Computing Machinery, New York, NY, USA, pp. 627–635. [Google Scholar]
- Scherpereel JA, Wohlgemuth J, & Lievens A (2018). Does institutional setting affect legislators’ use of Twitter? Policy & Internet, 10(1), 43–60. [Google Scholar]
- Sclove SL (1987). Application of model-selection criteria to some problems in multivariate analysis. Psychometrika, 52(3), 333–343. [Google Scholar]
- Scott JT, Ingram AM, Nemer SL & Crowley DM (2019). Evidence-based human trafficking policy: Opportunities to invest in trauma-informed strategies. American Journal of Community Psychology, 64(3–4), 348–358. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scott JT, Pugel J, Fernandes MA, Cruz K, Long EC, Giray C, Storace R, & Crowley DM (under review). Cutting through the noise during crisis by enhancing the relevance of research to policymakers. [Google Scholar]
- Templin C (1999). Hillary Clinton as threat to gender norms: Cartoon images of the first lady. Journal of Communication Inquiry, 23(1), 20–36. [Google Scholar]
- Tsetsi E, & Rains SA (2017). Smartphone Internet access and use: Extending the digital divide and usage gap. Mobile Media & Communication, 5(3), 239–255. [Google Scholar]
- van Deursen AJAM, & Mossberger K (2018). Any thing for anyone? A new digital divide in internet-of-things skills. Policy & Internet, 10(2), 122–140. [Google Scholar]
- Viglione G (2020). Four ways Trump has meddled in pandemic science—And why it matters. Nature. 10.1038/d41586-020-03035-4 [DOI] [PubMed] [Google Scholar]
- Volden C, & Wiseman AE (2018). Legislative effectiveness in the United States Senate. The Journal of Politics, 80(2), 731–735. [Google Scholar]
- Walker DM, Hefner JL, Fareed N, Huerta TR, & McAlearney AS (2019). Exploring the digital divide: Age and race disparities in use of an inpatient portal. Telemedicine and E-Health, 26(5), 603–613. [DOI] [PMC free article] [PubMed] [Google Scholar]