Abstract
What solutions can we find in the research literature for preventing sexual violence, and what psychological theories have guided these efforts? We gather all primary prevention efforts to reduce sexual violence from 1985 to 2018 and provide a bird’s-eye view of the literature. We first review predominant theoretical approaches to sexual-violence perpetration prevention by highlighting three interventions that exemplify the zeitgeist of primary prevention efforts at various points during this time period. We find a throughline in primary prevention interventions: They aim to change attitudes, beliefs, and knowledge (i.e., ideas) to reduce sexual-violence perpetration and victimization. Our meta-analysis of these studies tests the efficacy of this approach directly and finds that although many interventions are successful at changing ideas, behavior change does not follow. There is little to no relationship between changing attitudes, beliefs, and knowledge and reducing victimization or perpetration. We also observe trends over time, including a shift from targeting a reduction in perpetration to targeting an increase in bystander intervention. We conclude by highlighting promising new strategies for measuring victimization and perpetration and calling for interventions that are informed by theories of behavior change and that center sexually violent behavior as the key outcome of interest.
Keywords: primary prevention, sexual violence, meta-analysis, intervention, randomized controlled trial, behavior change
This article presents an intellectual history and quantitative meta-analysis of primary prevention efforts to reduce sexual violence. Sexual violence is a major public-health problem in the United States and across the globe (Basile et al., 2022; Centers for Disease Control and Prevention, 2022). Accordingly, significant efforts and resources are aimed at primary prevention strategies, which are intended to prevent violence before it occurs. The search for theoretically informed interventions to reduce sexual violence spans multiple academic disciplines, including public health, sociology, medicine, criminology, economics, and psychology. We approach this endeavor as an interdisciplinary team led by psychologists. Our article speaks to the fundamental goals of these varied interventions: to change ideas about sexual violence (broadly construed) and to reduce rates of sexual violence. On the basis of these goals, we consider the underlying psychological theories guiding primary prevention programs to reduce sexual violence, how these theories compare to psychological evidence for behavior change, and how effective these programs have been in reducing sexual violence.
We assess these questions qualitatively and quantitatively. Our qualitative review describes the explicit and implicit theories of behavior change that have guided prevention programs since 1985. We observe a throughline: Sexual-violence interventions seek to change behavior by changing attitudes, knowledge, and beliefs about sexual violence (i.e., ideas about sexual violence). We call this overall strategy the “ideas-based approach.” To interrogate the assumptions of the ideas-based approach, we review the psychological literature on the efficacy of changing attitudes, beliefs, and intentions as a means of changing behavior. This strategy is more successful in some domains than in others, which leaves open the empirical question of how well suited it is to the domain of sexual-violence prevention.
Targeting ideas about sexual violence is the common strategy of the three interventions we identify as “zeitgeist programs.” These pioneering and influential programs represent the prevalent approaches to sexual-violence prevention in a particular period of time. The first is Safe Dates, which seeks to educate adolescents about dating violence (e.g., Foshee et al., 1996). The second zeitgeist program is The Men’s Program, which targets men’s empathy for victims of sexual violence (e.g., Foubert, 2000). The third is Bringing in the Bystander (e.g., Banyard et al., 2007), which marks a shift to a community-focused approach. We highlight the tremendous work that has been done and discuss the relationship between these different philosophies regarding primary prevention and the resulting changes in ideas about sexual violence and behaviors observed among intervention participants.
Our quantitative review gathers data from published and unpublished evaluations of primary prevention interventions to reduce sexual violence from any discipline and uses meta-analysis to estimate the average effect of all interventions. We also present average effects according to the method of evaluation (i.e., observational, quasi-experimental, and experimental methods). In keeping with the theoretical throughline we observe, our primary analysis of interest is the relationship between idea change and behavior change identified by these studies.
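To make the pooling step concrete for readers less familiar with meta-analysis, the sketch below shows a generic inverse-variance random-effects aggregation (DerSimonian-Laird style) of per-study standardized mean differences. The function and the numbers fed into it are illustrative assumptions for exposition only; they are not values from our database, and this is not necessarily the exact estimator used in our analyses.

```python
import numpy as np

def random_effects_pool(d, v):
    """Pool standardized mean differences (d) with sampling variances (v)
    using DerSimonian-Laird random-effects weights."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # between-study variance estimate
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_pooled, se, tau2

# Illustrative (made-up) per-study effects and variances for three hypothetical evaluations
d_pooled, se, tau2 = random_effects_pool([0.45, 0.20, 0.10], [0.02, 0.05, 0.03])
print(f"pooled d = {d_pooled:.2f} (SE = {se:.2f}), tau^2 = {tau2:.3f}")
```

In principle, the same pooling can be run separately within each evaluation method (observational, quasi-experimental, experimental) to obtain the method-specific averages described above.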
We also consider the most common and best-practice measures that were used to assess the efficacy of the interventions. The Sexual Experiences Survey is the gold standard for self-reported perpetration and victimization, asking about the perpetration or experience of behaviors, including unwanted touching and coercion. For ideas, we review rape myth acceptance, a highly prominent construct in this literature that measures the endorsement of rape-permissive ideas such as “some women deserve to be raped.” Slightly fewer than half (140 of 295) of the studies that we identified for our quantitative meta-analysis measured rape myths in some form. Prior work has found rape myth acceptance to be associated with sexual-violence perpetration (Trottier et al., 2021); many evaluations take the (implicit) position that because the two are positively related, successfully intervening to decrease rape myth acceptance will lead to a decrease in violence. We find that existing sexual-violence interventions are indeed effective for changing ideas such as rape myth acceptance but not for changing behavior—the ultimate goal of any prevention intervention.
We conclude with an analysis of the difficulty of both measuring and changing violence-related behavior and some ways forward for future research. With this review, we use our unique perspective as psychologists to assess not only the efficacy of interventions designed to prevent sexual violence but also the theories and behavioral approaches that explicitly or implicitly inform them.
The Problem of Sexual Violence
The Centers for Disease Control and Prevention (CDC) considers sexual violence a major public-health problem affecting the lifelong health, opportunity, and well-being of hundreds of millions of people (CDC, 2022). According to a recent report by the CDC, more than 50% of women and nearly one in three men have experienced contact sexual violence (Basile et al., 2022). Sexual violence affects a diverse array of people. Women between the ages of 18 and 25, women of color, and members of the LGBT+ community are at the greatest risk of being sexually assaulted (Fisher et al., 2010; Sinozich & Langton, 2014).
The prevalence of sexual violence on college campuses in the United States, and its destructive effects on students and the overall college environment, has been recognized as a public-health problem by a broad coalition of policymakers, researchers, and educators (e.g., Fisher et al., 2010; Hostler, 2014; Koss et al., 1987; Saul & Taylor, 2017; White House Task Force, 2014). The worldwide #MeToo movement brought attention to the problem of sexual violence, particularly in the workplace. Accordingly, there are many ongoing efforts to prevent sexual violence and mitigate its consequences (e.g., Culture of Respect, THAT GUY, It’s On Us). These and other related efforts draw on a tradition of pioneering scholarship on sexual-violence prevention, one that began courageously at the dawn of activist pushback against sexual violence (e.g., Brownmiller, 1975/2005; Connell & Wilson, 1974). To date, this tradition of scholarship has been summarized either in narrative form (e.g., DeGue et al., 2014; Rutherford, 2011, 2017) or only partially meta-analyzed (e.g., Kettrey et al., 2023; Kettrey & Marx, 2019; Mujal et al., 2021; Ruvacalba et al., 2022; Wright et al., 2020). Our current review provides empirical results from 13 countries and seeks to summarize all primary prevention efforts from 1985 to 2018. Our findings bring to light the successes and shortcomings of this literature.
Primary prevention efforts
Our review focuses exclusively on primary prevention strategies to combat sexual violence as a global and distinct class of sexual-violence intervention strategies. Primary prevention efforts are defined and distinguished from other kinds of antisexual violence efforts in the following ways (Krug, Dahlberg, et al., 2002):
Primary prevention strategies refer to “approaches that aim to prevent violence before it occurs” (World Health Organization [WHO], 2010, p. 7) and may be targeted at preventing either perpetration, victimization, or both.
Secondary prevention strategies refer to “approaches that focus on the more immediate responses to violence” (WHO, 2010, p. 7).
Tertiary prevention strategies refer to “approaches that focus on long-term care in the wake of violence” (WHO, 2010, p. 7).
All three strategies take aim at a different aspect of sexual violence in its chronology—before, immediately after, and in the longer-term wake of sexual violence. Primary prevention, which aims to prevent violence from occurring at all (or to reduce its incidence), also aims to prevent revictimization and recidivism (Basile et al., 2016; Black et al., 2011; DeGue et al., 2012; Krug, Mercy, et al., 2002). Another distinction of the primary prevention approach is that it focuses not only on potential victims of sexual violence but also on potential perpetrators (Krug, Dahlberg, et al., 2002; P. M. McMahon et al., 2000). Decreasing the number of potential perpetrators is necessary to achieve significant reductions in the prevalence of sexual violence (DeGue et al., 2012).
Despite its importance in combating sexual violence, the primary prevention approach has not been studied systematically to the same extent as secondary or tertiary approaches (Arango et al., 2014; Ellsberg et al., 2015; WHO, 2010). A number of quantitative and qualitative reviews of primary prevention strategies conducted in the past (e.g., Anderson & Whiston, 2005; DeGue et al., 2014; Ellsberg et al., 2015; Kettrey & Marx, 2019; Rivera et al., 2021; Verbeek et al., 2023; Vladutiu et al., 2011) fell short of providing a bird’s-eye view that encompasses all primary prevention efforts. For example, an influential review conducted by DeGue et al. (2014) captured the full breadth of primary prevention interventions, reviewing about 140 studies, but did not provide a quantitative assessment or meta-analysis of these efforts. Other reviews provided a quantitative assessment of a smaller slice of the literature, such as bystander interventions (Mujal et al., 2021; Park & Kim, 2023), programs specifically targeting men or masculine spaces (Ruvacalba et al., 2022; Wright et al., 2020), or interventions on college campuses (Kettrey et al., 2023; Kettrey & Marx, 2019; Wright et al., 2020). In sum, what is missing from the literature is a comprehensive empirical review encompassing the full scope of primary prevention interventions to provide a much-needed bird’s-eye view of what the dominant strategies for reducing sexual violence are, how successful these interventions are in reducing violence, and where the field should go next. The current review aims to do precisely that.
Formation of Database
To review all primary prevention interventions to combat sexual violence, we began by searching the literature for all of the relevant evaluations. Our search followed biomedical meta-analytic standards outlined by PRISMA (Moher et al., 2009; Page et al., 2021) and was preregistered on PROSPERO and the OSF. We used the ProQuest Summon service (Articles+) to search for published and unpublished articles from 1985 through 2018. We used 11 different keywords to describe sexual violence and its various manifestations (e.g., dating violence*, sex*, and abuse), along with qualifying terms indicating an intervention, such as interven*, prevent, or program. We supplemented this search by reviewing existing meta-analyses and narrative reviews to identify and retrieve additional reports that met the inclusion criteria. Finally, we went through all programs listed on Culture of Respect, a website summarizing college intervention programs to prevent sexual violence on campuses. For more information on our search and our PRISMA flow diagram, see Supplemental Appendix Figure 1, available online.
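As a concrete illustration of the search logic (not the verbatim query string, which is documented in the Supplemental Appendix), the Boolean structure pairs violence-related keywords with intervention qualifiers. The snippet below is a hypothetical reconstruction using only the three violence keywords named in the text.

```python
# Hypothetical reconstruction of the Boolean search structure; the exact query and
# full list of 11 violence keywords are documented in the Supplemental Appendix.
violence_terms = ['"dating violence*"', "sex*", "abuse"]
intervention_terms = ["interven*", "prevent", "program"]

query = f"({' OR '.join(violence_terms)}) AND ({' OR '.join(intervention_terms)})"
print(query)
# -> ("dating violence*" OR sex* OR abuse) AND (interven* OR prevent OR program)
```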
Inclusion and exclusion criteria for study selection
Our exhaustive search yielded 12,677 articles that we reviewed. The authors and a team of research assistants decided whether a study met the eligibility criteria for inclusion. The PRISMA flow diagram (Supplemental Appendix Figure 1) explains how we winnowed the original pool of studies down to the 224 articles included in this review.
To be included in this review, studies had to fit four inclusion criteria. First, because our review focuses on primary prevention strategies, to be included, a study needed to research an intervention that seeks to reduce sexual violence within the general population (rather than at-risk populations) and take place before the violence occurs (rather than in its aftermath).
Second, we included studies that conducted a programmatic evaluation using one of three common methods. The first and most rigorous is the randomized controlled trial: an experimental study in which participants (or groups, classes, or schools) are randomized either to a treatment condition or to a control condition. The second is the quasi-experimental design, in which a group that receives treatment is compared to a control group without random assignment. Under this category, we also included studies in which participants were randomized into one of multiple treatment arms without a control condition. For example, Taylor et al. (2016) randomly assigned 23 middle schools to one of four treatment arms to test what dosage was needed to produce lasting reductions in violence. The third is the observational design, which we included so long as it had both pretreatment and posttreatment measures to allow a comparison of baseline and endline. Table 1 details the number of articles and studies for each study design.
Table 1.
Number of Articles and Studies by Study Design
Study design | Articles (n) | Studies (n) |
---|---|---|
Randomized controlled trial | 72 | 102 |
Quasi-experimental | 80 | 100 |
Observational | 77 | 96 |
Note: Although we report data from 224 unique articles, some articles reported multiple studies with different designs or measured different outcomes in multiple ways. Thus, there is a mismatch between the total number of articles and studies reported here and our overall tally of unique articles and studies.
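To make concrete how a program’s impact can be expressed on a common scale across these three designs, the sketch below shows one conventional way to compute a standardized mean difference from summary statistics: a between-group comparison for designs with a control group, and a baseline-to-endline comparison for observational designs. The formulas are standard; the numbers are illustrative and do not come from any study in the database.

```python
import math

def d_between(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Cohen's d for treatment vs. control (RCTs and quasi-experiments)."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (m_t - m_c) / sd_pooled

def d_prepost(m_post, m_pre, sd_pre):
    """Standardized baseline-to-endline change (observational designs),
    scaled by the baseline standard deviation."""
    return (m_post - m_pre) / sd_pre

# Illustrative numbers only; lower scores = less rape myth acceptance
print(d_between(m_t=2.1, sd_t=0.8, n_t=120, m_c=2.5, sd_c=0.9, n_c=115))  # ~ -0.47
print(d_prepost(m_post=2.1, m_pre=2.5, sd_pre=0.9))                        # ~ -0.44
```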
Third, we included studies conducted in various contexts (community, school, workplace, etc.) and in any country. Fourth, we included studies that assessed either behavior (through self-report or observational methods) or ideas related to sexual violence (such as rape myths or knowledge).
Our exclusion criteria were also fourfold. First, although our review aims to be as broad as possible, we excluded programs that did not focus specifically on reducing sexual-violence perpetration. A number of programs fall under this exclusion criterion. We excluded programs targeting children who were 9 years old or younger or who had not yet entered 5th grade because these programs are geared toward appropriate sex education rather than the prevention of sexual violence. We excluded secondary or tertiary prevention programs that focus on addressing the aftermath of a sexual assault that has occurred, as well as programs with a primary focus other than, or much broader than, sexual-violence prevention, such as programs focused on HIV-acquisition prevention (Jewkes et al., 2006, 2008), men who have sex with men, alcohol consumption, or domestic violence. We also excluded programs aimed at changing the behavior of potential targets of sexual violence (e.g., victimization prevention or risk-reduction interventions), for example, a 6-week self-defense program for Kenyan adolescent girls (Sinclair et al., 2013) or a four-meeting resistance program for first-year female undergraduate students (Senn et al., 2015). Finally, we excluded programs that targeted particularly at-risk populations, such as prisoners or men with antisocial tendencies who may respond to interventions with reactance (Malamuth et al., 2018). We did this to focus our findings on primary prevention efforts that are maximally applicable across populations—with the caveat that any intervention likely needs to be tailored to fit its context—and that explicitly and intentionally attempt to reduce perpetration behavior.
Our second exclusion criterion concerned evaluations whose reporting made it impossible to estimate a program’s impact. In practice, these were either studies that did not appropriately report statistical tests or quasi-experimental and observational designs that did not provide a baseline measure that could be compared to a posttreatment measure to gauge the program’s impact. For example, we excluded nonrandomized studies that compared one program to another program but had no premeasure that would allow inferring baseline ideas about or rates of sexual violence. For the same reason, we excluded studies in which the comparison condition included a combined sample of control participants and participants who received a different sexual-violence prevention program. We also excluded nonrandomized studies that compared combined data from multiple programs to a control condition because this did not allow a clear identification of each program’s effect.
Our third exclusion criterion concerned the outcomes measured: We excluded studies that did not measure outcomes relevant to sexual violence. Fourth and finally, we excluded a handful of studies that were not reported in English.
All articles were initially read by either R. Porat or J.-H. Pezzuto, who coded the quantitative attributes of the study. We also gathered a team of research assistants blind to the hypotheses of the study to code qualitative attributes (e.g., the stated purpose of the intervention and how the intervention was delivered). We recorded a behavioral outcome (when available) and an ideas-based outcome (i.e., belief, attitude, perceived norm, knowledge) for each study in the database. In cases in which studies reported multiple eligible outcomes (e.g., rape myth acceptance and knowledge), we took the outcomes assigned the most priority in the abstract or article by the authors, a rule that typically favors selecting positive results of the intervention. In cases in which the outcomes were measured at multiple time points, we took both the most immediate postintervention measure and the most delayed. After this phase, given the relative advantage of experimental designs for inferring causality, all experimental studies were double-checked by at least one of the authors. Coding disagreements were all resolved by documented discussions among the authors, and all codes were double-checked by either a research assistant or an author (for all coding documentation, see the Supplemental Appendix).
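For illustration only, the per-study record described above can be thought of as something like the following schema. The field names and example values are ours and are hypothetical; they are not the actual column names or entries of our coding sheets.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyRecord:
    """Hypothetical sketch of the per-study coding schema described in the text."""
    article_id: str
    design: str                         # "RCT", "quasi-experimental", or "observational"
    ideas_outcome: Optional[str]        # e.g., "rape myth acceptance", "knowledge"
    behavior_outcome: Optional[str]     # e.g., "SES perpetration", "bystander behavior"
    effect_immediate: Optional[float]   # estimate at the most immediate posttest
    effect_delayed: Optional[float]     # estimate at the most delayed follow-up

# Illustrative entry with made-up values
example = StudyRecord("example_article_01", "RCT",
                      "rape myth acceptance", "SES perpetration", -0.40, -0.15)
```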
Overview of the Studies in the Meta-Analytic Database
We begin by describing the database of primary prevention efforts to reduce sexual violence. The database encompasses 224 articles that describe 295 studies spanning from 1985 to 2018, for which we coded 489 distinct point estimates. Overall, these were relatively large studies, with an average of 406 participants and a median of 162 participants. In addition, 30% of these studies tested primary prevention efforts that spanned more than 1 day.
Over these years, we observed an uneven rise in the study of primary prevention efforts. This rise is depicted in Figure 1. Figure 2a portrays where these efforts were centered. We observed that 262 studies, constituting 89% of the database, were conducted in the United States. The remaining 33 studies came from Canada, Europe, Israel, and Africa.
Fig. 1.
Number of studies over time (1985–2018).
Fig. 2.
Overview of the studies in the meta-analytic database.
Figure 2b depicts the settings in which primary prevention efforts took place. Over 66% of all studies were conducted on college campuses. Another 15% were conducted in high schools, whereas 11% were conducted in middle schools. Four percent of studies were conducted in the workplace, and 3% were conducted in community settings. This relatively small number of studies conducted outside of college campuses underscores how rarely researchers study adults outside of higher education settings.
Finally, Figure 2c characterizes the gender composition of the study participants. We observed a persistent rise in the number of studies targeting both men and women. In fact, 67% of the studies targeted both men and women, whereas 28% targeted only men and 6% targeted only women. This trend likely reflects the increase in bystander approaches, which are aimed at the community level, conceptualizing both men and women as community members who can prevent sexual assault.
The Leading Approach to Reducing Sexual Violence
Our bird’s-eye view of primary prevention programs reveals that the dominant approach to reducing sexually violent behavior is to change people’s attitudes, beliefs, and knowledge about sexual violence. In other words, these interventions all target aspects of the mind—what people think and believe about sexual violence, who perpetrates it, who is victimized, why sexual violence occurs, what counts as sexual violence, what people can do to stop it, and whether those actions will be effective. We call this the ideas-based approach, and its core assumption is that people’s thoughts are the primary cause of their behavior. Virtually every intervention we reviewed took this approach in terms of both its guiding theory and specific content. This was also reflected in the kinds of outcomes measured. As depicted in Figure 3, in our database, ideas about sexual violence were consistently measured more often than behaviors (typically self-reported). For example, rape myths such as “some women say ‘no’ when they mean ‘yes’” were very frequent targets of intervention. Implicit in this approach is the notion that myths about sexual violence cause sexual violence—changing the myths should change the behavior. This paradigm has guided sexual-violence reduction since the beginning of the field.
Fig. 3.
Ideas and behavior over time (1985–2018).
We also observed a significant change over the years in the goal of primary prevention programs. Although the immediate goal at first was to reduce perpetration behavior, later on, with the rise of the bystander approach, the immediate goal became eliciting bystander behavior (we reflect on these trends when discussing measurement later). Yet the overall psychological model of behavior change stayed the same. Whether the focus is on reducing perpetration behavior or increasing bystander behavior, the implicit assumption is that if a program successfully changes how people think about sexual violence, then it will also successfully reduce instances of perpetration (and thereby victimization) either directly or via an increase in instances of bystander behavior.
This notion that ideas, including beliefs, attitudes, and other thoughts, are the primary cause of behavior—as opposed to contexts or personality—has a long-standing history in the field of psychology (see Ajzen, 1991; Ajzen et al., 2007). Numerous reviews and meta-analyses have examined the strength of the association between ideas and behavior, specifically the attitude-behavior association, which varies by attitude characteristic and context (Glasman & Albarracín, 2006). For example, across a range of behaviors, including voting and product choice, attitudes that were stable over time showed the strongest relationship to behavior (r = .37) and attitudes that were highly personally relevant and important showed the weakest relationship to behavior (r = .02). Critically, however, these relationships are predictive but not necessarily causal (Cooke & Sheeran, 2004).
Although consistent attitudes and behaviors have been found to be positively associated, broadly speaking, intervening on attitudes does not always change behavior. There are (at least) two ways to think about this dissociation. The first is that attitudes do not perfectly predict behavior. Our attitudes contribute to the formation of our intentions, but intentions do not seamlessly lead to the realization of goals. This space between what people intend to do and what they actually do has been coined the “intention-behavior gap” (Sheeran & Webb, 2016). This gap was demonstrated in a meta-analysis of 47 experiments that randomly assigned people to receive either a control condition or an intervention designed to change their intentions in various contexts (e.g., studying, exercising, applying sunblock, using condoms, ceasing smoking). Findings suggest that a substantial change in intention (d = 0.66) was related to a smaller change in behavior (d = 0.36; Webb & Sheeran, 2006). In recognition of this intention-behavior gap, much research has investigated the obstacles that can come between a person’s stated desired end state or goal and the achievement of that goal. People sometimes do not meet their goals because they forget to do so, prioritize another goal in the moment, find a task harder than anticipated, or are not particularly committed in the first place (Fishbach et al., 2003; Zhang & Fishbach, 2010). For example, imagine someone who intends to intervene if they overhear a sexist joke. Despite a strongly held attitude that sexist jokes are wrong, and an active goal to intervene, they may still fail to do so because they prioritize something else (e.g., social harmony) more highly in the moment. Likewise, they might fail because it was harder to think of something to say than they had anticipated. Even if a given attitude does impact behavior in a meaningful sense, the relationship is noisy and probabilistic.
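To compare these experimental effects (reported as d) with the correlational estimates above (reported as r), readers can use a standard approximation for two equal-sized groups; applying it here is our own illustration, not a computation reported by Webb and Sheeran (2006):

```latex
r \;=\; \frac{d}{\sqrt{d^{2} + 4}}, \qquad
r(d = 0.66) = \frac{0.66}{\sqrt{0.66^{2} + 4}} \approx .31, \qquad
r(d = 0.36) = \frac{0.36}{\sqrt{0.36^{2} + 4}} \approx .18
```

On this approximate scale, the intention and behavior changes correspond to correlations of roughly .31 and .18, which helps place the experimental estimates alongside the correlational ones reported above.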
The second explanation for the dissociability between an idea and a behavior is that a person can hold an idea, and even act in ways consistent with that idea, without the idea playing a causal role in bringing about the behavior. A number of domain-specific reviews provide some evidence for this lack of causal association. For example, in the health literature, a meta-analysis examining a range of communications interventions that sought to increase condom use to prevent HIV acquisition and transmission found that persuasive messaging increased knowledge and favorable attitudes toward condom use without changing behavior (Albarracín et al., 2003). In addition, we reviewed a number of rigorous evaluations in the prejudice-reduction literature (e.g., Mousa, 2020; Paluck, 2009; Scacco & Warren, 2018) that found behavioral change but not attitudinal change (Paluck et al., 2021).

But it is possible to change behavior by changing ideas. One example comes from a meta-analysis of interventions aimed at changing attitudes, norms, and self-efficacy beliefs to change health-related behavior (Sheeran et al., 2016). Findings suggest that these interventions are very successful at changing attitudes, norms, and self-efficacy beliefs (d = 0.48, 0.49, and 0.51, respectively) and can have real effects on health-related behavior (d = 0.38, 0.36, and 0.47, respectively). The authors noted, however, that there was substantial variability, such that some interventions had large effects and others had none at all. Ultimately, whether changing ideas about sexual violence will lead to changes in sexual violence itself is an empirical question.
In sum, the fact that ideas about sexual violence (such as endorsing rape myths) are associated with sexually violent behavior does not guarantee that changing the ideas people have will change their behavior. As we have reviewed, the difficulty in changing behavior, especially via attitudes or intentions, is not unique to primary prevention programs to reduce sexual violence. By looking at other domains such as health and prejudice, we see that changing behavior is not solely a problem of changing people’s ideas. However, as noted earlier, the vast majority of primary prevention interventions target ideas about sexual violence to reduce sexual violence (either directly or via bystander intervention). Next, we provide an in-depth historical review of the evolution of these theoretical approaches and trends that informed primary prevention programming over the years.
Zeitgeist programs and their predecessors
In this section, we describe the intellectual progression of primary prevention interventions to reduce sexual violence. We find that the overarching behavioral approach is to change people’s behavior by changing their ideas. This overall approach remains consistent, although the more proximate theories guiding program design change. We now review these trends in the literature, emphasizing the different fields of research that contributed and the shifts in thought and theory that informed the type of programs developed. We begin by characterizing the first decade (1985–1995). Then, to illustrate theoretical trends, we describe in detail what we call zeitgeist programs in the literature starting from 1996. These studies capture the defining approach of their time, as reflected by the ideas and beliefs guiding the programs. Overall, we review three programs that exemplify the theoretical approaches that underlie the most prevalent programs: Safe Dates (Foshee et al., 1996), The Men’s Program (Foubert, 2000), and Bringing in the Bystander (Banyard et al., 2007).
The first 10 years
Our review begins in 1985 (similar to other reviews; see, e.g., DeGue et al., 2014), with the first study meeting our inclusion criteria published in 1986 (Fischer, 1986). In the first 10 years, we found 44 studies reported in 32 articles (comprising 14% of our database). Most of these studies were conducted by graduate students, consistent with many research programs breaking into the mainstream from the margins. This is also apparent when looking at the type of publication: 24 of these studies were from journal articles, and 20 were from dissertations. The researchers came from various subfields of psychology, including educational psychology and counseling psychology. The majority of these studies were laboratory studies (n = 24) as opposed to field studies (n = 20), which characterize the larger programmatic evaluations that dominate the next generation of studies. Accordingly, these early studies tended to be smaller than the next generation of studies, averaging 200 participants each. They targeted both men and women (31 interventions, which constituted 67% of all studies in this time frame), but a relatively high number of studies targeted only men (11 studies, or 24% of all studies in this time frame). We acknowledge this important groundwork here. We did not find a singular perspective among these studies that could be well represented by one program.
In subsequent years, three interventions exemplify trends in the primary prevention space. We review Safe Dates, an education program aimed at teaching adolescents about dating violence; The Men’s Program, which seeks to increase empathy among men for victims; and Bringing in the Bystander, which marks a considerable and ongoing shift away from changing perpetration directly and toward increasing bystander behavior.
Safe Dates
Vangie Foshee’s Safe Dates program, first evaluated beginning in 1996 through a longitudinal randomized field experiment, is one of the most thoughtful and well-studied educational curricula aimed at teaching adolescents, regardless of gender, about dating violence and relationship skills. It was groundbreaking in its ambition and evaluation and represents a major shift in the literature for both its methods and content.
Safe Dates makes use of multiple strategies, including a play performed by students, a poster contest, and a 10-session curriculum. It is thus one of the first programmatic evaluations in the literature of a scalable, well-structured curriculum that can easily be implemented in the field. It inspired later programs, including Shifting Boundaries, one of the best evaluations we encountered, which we elaborate on later. The theoretical rationale binding these three components is that perpetration and victimization may be decreased by changing dating-abuse norms and gender stereotypes and by improving students’ interpersonal skills, including positive communication, anger management, and conflict resolution (Foshee et al., 1996, 1998). Safe Dates approaches the problem of sexual violence as an education problem. The intensive, thoughtful, interactive intervention takes place in schools and is explicitly referred to as a curriculum. The curriculum is designed to change ideas that students have about dating violence—especially norms and stereotypes—and to provide communication skills; these changes are put forward as the causal mediators that will lead to a decrease in perpetration.
The play, which is an essential part of the curriculum, is about an adolescent who seeks help with her relationship with a violent partner. It is performed by students at the school and is followed by a discussion led by the actors. The curriculum contains numerous modules that begin with “Defining Caring Relationships” and end with “Preventing Sexual Assault.” To address norms, students complete worksheets with their peers about the consequences of dating violence, assess dating conflicts together, and work together to define dating abuse. To learn about gender stereotypes, students write about their own experiences with gendered expectations and analyze scenarios discussing how gender stereotypes relate to dating violence. To improve conflict-management skills, students participate in role-playing exercises. Finally, posters are made and judged by students, and winners of the best posters get a prize.
In terms of evaluation, Safe Dates represents the first large-scale randomized experiment in the literature, setting rigorous standards for future evaluations. This is not surprising given that it was also one of the first studies conducted by scholars based within a school of public health, demonstrating that a new discipline had joined the effort to develop and evaluate primary prevention programs. Foshee et al. (1998) reported on a cluster-randomized controlled trial conducted in 14 public schools in North Carolina with more than 1,800 adolescents. The trial compared students in the treatment group to those in a control group who were exposed to community activities unrelated to sexual violence. It was the first randomized field test of a primary prevention program with a nonconvenience sample of eighth- and ninth-grade students rather than undergraduate college students. Although a number of previous evaluations had been conducted with younger participants (e.g., Avery-Leaf et al., 1997; Feltey et al., 1991; Pohl, 1990), this was the first randomized evaluation outside of a college campus. The Safe Dates evaluation was also the first to assess the program’s longitudinal impact, in a series of five influential articles (see Foshee et al., 1996, 1998, 2000, 2004, 2005). Findings from this five-wave data collection suggest that adolescents exposed to the program reported less psychological, physical, and sexual dating-violence perpetration and less physical dating-violence victimization at all follow-up periods. These behavioral effects were explained by changes in dating-violence norms, gender stereotyping, and awareness of services (Foshee et al., 2005). The long-term effectiveness of Safe Dates was not improved by a “booster” 2 to 3 years after the initial intervention; in fact, analyses suggested this booster may have had negative effects.
The Safe Dates curriculum has also been adapted for use in other contexts. Participation in a family-based teen dating-abuse prevention program (Families for Safe Dates; Foshee et al., 2012) was significantly associated with caregiver engagement in the prevention of teen dating abuse, decreased teen acceptance of dating abuse, and the prevention of victimization. Additionally, an adapted version of the school-based Safe Dates program was associated with significant increases in domestic-violence knowledge among high school students in Haiti (Gage et al., 2016), although the findings pointed to the need for additional examinations of gender differences.
One significant impact of Safe Dates was the creation and evaluation of a similar intervention called Shifting Boundaries. Like Safe Dates, Shifting Boundaries is a school-based education program, and it is exemplary in its rigorous evaluation. It was created in response to Safe Dates and other interventions that were geared toward older adolescents and young adults (Taylor et al., 2013). The intervention targets middle school students and consists of six classroom sessions administered by trained school personnel over a period of 6 to 10 weeks. These sessions covered topics including state and federal laws related to dating violence and harassment, consequences for perpetrators, strategies for communicating boundaries in interpersonal relationships, and the importance of bystander intervention. Additionally, Taylor et al. (2013) developed a building-level intervention that included building-based restraining orders (Respecting Boundaries Agreements), posters to increase awareness of dating violence and harassment, and increased presence of faculty or school security in “hot spots” identified by students.
Shifting Boundaries has been rigorously evaluated numerous times (e.g., Taylor et al., 2010, 2013, 2017). Taylor et al. (2013) assessed whether both the classroom and building components were necessary to decrease violence. To do so, they randomly assigned 30 public middle schools in New York City to have the classroom intervention only, the building intervention only, both interventions, or neither. Students completed surveys assessing their experiences with sexual harassment and sexual violence (including perpetration and/or victimization), their knowledge of sexual harassment and sexual violence, and their behavioral intentions at baseline, immediately after the intervention, and 6 months postintervention. The evaluation suggested that the building-level intervention effectively reduced sexual-violence perpetration and victimization both as a stand-alone intervention and in combination with the classroom-based intervention. However, the classroom-based intervention alone was not effective in reducing sexual violence (Taylor et al., 2015). They also noted an unexpected finding: that the interventions were associated with an increase in the prevalence of sexual-harassment victimization (Taylor et al., 2013, 2015).
In another randomized evaluation, Taylor et al. (2017) examined the saturation levels necessary to reduce violence. Specifically, they tested whether the program needed to be delivered to all three middle school grades or just one or two. To do so, they randomly assigned 23 public middle schools in New York City to receive the Shifting Boundaries intervention for (a) sixth graders only, (b) sixth and seventh graders, or (c) sixth through eighth graders. They found that providing the treatment to only one grade level was as effective for peer violence and dating-violence outcomes as the more saturated approach of treating multiple grades. In addition, they found that at both the 6-month and 12-month assessments, additional saturation beyond one grade was associated with reductions in sexual-harassment victimization. One surprising finding, however, was that delivering the Shifting Boundaries program to more than one grade level was associated with more reported perpetration of sexual violence at 12 months posttreatment compared with the sixth-grade-only group (Taylor et al., 2016). Although the authors noted that the results may have been spurious, it is also possible that there were unexplored measurement errors or backlash effects. This warrants further investigation (Taylor et al., 2017).
The Men’s Program
The second zeitgeist we observed in the literature, which emerged around the same time as Safe Dates, was a set of interventions that sought to reach men in particular. This strategy is best exemplified by The Men’s Program. This program aims to prevent sexual-violence perpetration among men by increasing the empathy and support they offer to victims of sexual violence and by reducing their resistance to violence-prevention programming (Foubert et al., 2006). Foubert et al. (2006) argued that existing prevention interventions were unlikely to be effective because they “[assumed] all-male program participants to be potential rapists” (p. 134). The Men’s Program engages men as potential helpers of victims and begins by eliciting empathy for victims of sexual violence.
Participants in the program watched a 15-min dramatization of a male police officer who was raped by two other men and then dealt with the aftermath of the assault. Trained peer educators then told the participants that the perpetrators were heterosexual and known to the victim and attempted to draw connections between the male police officer’s experience and common sexual-violence experiences among women. Participants were then taught strategies for supporting a rape survivor, definitions of consent, and strategies for intervening when a peer jokes about rape or disrespects women and in situations in which a rape may occur. As part of the 1-hr program, men were also taken through a guided imagery activity and asked to imagine that a woman close to them was raped while a bystander watched and did not intervene. The goal of the program was to increase men’s empathy for and desire to help victims of sexual violence and thereby decrease the likelihood that they would commit sexual violence in the future.
The Men’s Program was created to reduce reactance from male audiences and to motivate and empower them to help potential victims of sexual violence. The psychological target of the intervention is empathy. The main content of the programming is learning about the experience of a male victim and drawing similarities to female victims. The intervention approaches men as potential allies who can help women once they understand the gravity of sexual violence, its many forms, and how to help. The Men’s Program was novel in its approach to its audience of men because it engaged them as potential allies to victims instead of implying they are potential perpetrators. This approach is still very much the dominant form of messaging in primary prevention programs, as we discuss in our third zeitgeist program, Bringing in the Bystander, below.
The Men’s Program was evaluated through open-ended survey questions and focus groups immediately after the conclusion of the program (Foubert & Cremedy, 2007; Foubert et al., 2006; Foubert & Marriott, 1996). In multiple samples of undergraduate men on college campuses, approximately half of participants reported that their attitudes did not change as a result of the program (Foubert & Cremedy, 2007; Foubert et al., 2006). Although some men clarified that this was because they already agreed with the program’s message, it is unclear whether the remainder agreed with the message or were unreached by the intervention. In both survey responses and focus groups, participants reported increased awareness of sexual violence, improved understanding of consent, and greater belief in their ability to help victims of sexual violence (Foubert & Cowell, 2004; Foubert & Cremedy, 2007; Foubert et al., 2006). When asked how they expected their behavior would change as a result of the program, approximately one quarter to one third of men anticipated no behavior change (Foubert & Cremedy, 2007; Foubert et al., 2006; Foubert & Marriott, 1996). As with their attitudes, it was unclear whether this was because participants felt their behavior was already in line with the program’s message. However, the majority of participants reported intentions to change their own behavior by intervening if a rape might occur or by using more caution to avoid coercion in intimate situations (Foubert & Cremedy, 2007; Foubert et al., 2006; Foubert & Marriott, 1996). Foubert and La Voy (2000) assessed the lasting impact of The Men’s Program on participants’ ideas about sexual violence through open-ended survey questions administered 7 months after program participation. Over half of participants reported that the program changed their ideas about sexual violence, specifically their endorsement of rape myths, as well as their intentions to help stop potential sexual violence. Note, however, that The Men’s Program lacks the rigorous evaluation of Safe Dates: The evaluations have much smaller sample sizes, and treatments were assigned and delivered to men at the fraternity level but analyzed as if treatment had been randomized to individuals.
Bringing in the Bystander
Bringing in the Bystander is a program that represents a sharp turning point in the focus of primary prevention for sexual violence. Although there were some evaluations of bystander interventions in the 1970s and 1980s (e.g., Harari et al., 1985), bystander interventions became popular only later, in part because of The Men’s Program’s influential approach of viewing men as potential allies rather than solely as potential perpetrators. Bringing in the Bystander was first evaluated in 2007 and puts helping others who are in danger and speaking up against sexist ideas (i.e., “bystanding”) at the center of the intervention. As a result, the target of behavior change moves from decreasing perpetration behavior to increasing bystander behavior. The intervention is aimed not at men as potential perpetrators and women as potential victims but at everyone as a person who can potentially intervene and stop sexual violence. Bringing in the Bystander has been massively influential in programming across the United States, where the vast majority of these interventions have been evaluated. It was featured in the first report of the White House Task Force to Protect Students From Sexual Assault (White House Task Force, 2014), and nearly every intervention reviewed since 2006 has had a bystander component to it.
The Bringing in the Bystander program involves a 90-min or 4.5-hr session administered to single-sex groups of undergraduate students. Two trained facilitators (one man and one woman) introduce the concept of bystander intervention, discuss the role of community members in preventing sexual violence, and help participants develop skills to safely intervene. There is now a shift in the behavior targeted for change: Bystander interventions aim to increase bystander behavior and do not typically measure self-reported perpetration or victimization. The underlying assumption is that if rates of bystander intervention go up, then rates of perpetration and victimization will go down.
The theory behind Bringing in the Bystander is based on an ecological model (e.g., Bronfenbrenner, 2005; Kelly, 2006; WHO, 2005) that analyzes the factors shaping human behavior at the level of peers and families (e.g., what my friends think), at the level of the larger communities that people belong to or are adjacent to without directly participating in (e.g., what my college president thinks, what happened at that party last night), and at the level of larger societal and cultural beliefs (e.g., how the news covers assault cases; Banyard, 2011). In the ecological model, rape myths, a target of intervention across all three zeitgeist studies, are understood as a societal-level factor (rather than an individual belief) that makes it difficult to identify a potentially dangerous situation and decreases an individual’s sense of responsibility to help. It is worth noting that the factors identified at these varying levels, from the individual to the individual’s immediate social circle to society at large, are all ideas held at varying distances from the individual in the community. They are attitudes, norms, and beliefs.
Banyard et al. (2007) conducted a rigorous and influential experimental evaluation of the Bringing in the Bystander program. They assessed participants’ knowledge and attitudes about sexual violence, rape myths, and actual bystander behaviors; attitudes toward bystander intervention; and confidence in their ability to intervene. Participants reported on their bystander behaviors by answering “yes” or “no” questions about 51 behaviors over the past 2 months. These behaviors included actions such as walking an intoxicated friend home or responding to someone calling for help at night. These outcome variables were assessed among the treatment groups (those who participated in the 90-min or 4.5-hr program) and a control group at pretest and posttest, as well as follow-ups at 2 months, 4 months, and 1 year after the conclusion of the program. Participants in both treatment groups demonstrated decreased rape myth acceptance, increased knowledge of sexual violence, increased prosocial bystander attitudes and feelings of self-efficacy, and increased self-reported bystander behaviors compared with the control group. These effects persisted 2 months after the program, potentially because of a brief booster session administered at that time. Although the program’s effects on participants’ knowledge and attitudes also persisted at the 4- and 12-month follow-ups, effects on self-reported bystander behaviors did not persist beyond the 2-month follow-up. It is important to note here again that the self-reported behaviors of interest are bystander behaviors and not rates of perpetration and victimization.
Moynihan et al. (2010) further evaluated the impact of the program among a sample of college athletes. Participants in the treatment group and a control group completed measures of rape myth acceptance, bystander efficacy, and intention to help at pretest, posttest, and a 2-month follow-up. Participants also self-reported bystander behavior at pretest and the 2-month follow-up. As in the previous evaluation, the Bringing in the Bystander program significantly impacted participants’ attitudes toward bystander intervention and behavioral intentions but did not lead to the desired changes in actual bystander behavior. The intervention itself has also been the subject of an entire meta-analysis (Bouchard et al., 2023) that reported an overall reduction in endorsement of rape myths and increased self-efficacy beliefs and intentions to be an active bystander. Too few studies measured bystander behavior for analysis.
Bystander interventions such as Bringing in the Bystander have, in effect, taken over the marketplace of ideas of how to prevent sexual violence. This is in large part due to its promising effects on rape myth acceptance and intentions to intervene and its many evaluations (Bouchard et al., 2023). An additional lasting impact of Bringing in the Bystander is the development of other bystander interventions such as Green Dot (Coker et al., 2016), which was first evaluated by Coker et al. (2011) and is currently recommended by the CDC alongside Bringing in the Bystander (Basile et al., 2016). In approaching the audience as potential allies, regardless of gender, bystander interventions may also be effective in reducing reactance, not only among audience members of the intervention but also among those who select interventions to be implemented at their universities and workplaces.
Zeitgeist programs summary
In our search for primary prevention interventions to reduce sexual violence, we observed a substantial shift in focus over time. Pioneering prevention efforts (in the 1990s and early 2000s) focused on education in schools for adolescents about dating violence (exemplified by Safe Dates), as well as efforts to increase empathy for victims among men (exemplified by The Men’s Program). Then we observed a substantial shift in theorizing about sexual violence, as exemplified by Bringing in the Bystander. Looking at the community as a whole rather than identifying men as potential perpetrators and women as potential victims, these efforts identify all participants as members of the community and aim to empower them to recognize potential situations to intervene—and to do so when necessary.
As we see it, there are positives and negatives to this shift. Bystander interventions seem to be very successful in generating buy-in across institutions (including the White House in 2014) and seem to change rape-related attitudes and decrease rape myths, which can shift culture and peer networks (and in turn affect others’ individual attitudes; Swartout, 2013). In our view, one downside of this move toward bystander-intervention programming is that it changes the behaviors that are targeted for intervention (and evaluation) from reductions in perpetration behavior to increases in bystander behavior. There is evidence in some studies that bystander training can reduce victimization and sometimes (but not always) perpetration on college campuses (Coker et al., 2016; Gidycz et al., 2011). However, whether increasing bystander behavior leads to reductions in sexual-violence perpetration has yet to be rigorously tested.
All interventions we reviewed hypothesized that they would change ideas about sexual violence, and all included some information about helping others in need or intervening on behalf of others. We now turn to our quantitative meta-analysis to assess the efficacy of these and other primary prevention interventions to reduce sexual violence. We prioritized measurements of rape myth acceptance and reports of behavior as measured by the Sexual Experiences Survey (SES) or bystander behavior. We pay special attention to the relationship between changed ideas and changed behavior. We also examine how frequently bystander interventions measure perpetration and victimization in our database, and among those interventions that do, whether they change them.
Trends in Measurement in the Database
As established, the underlying psychological theory guiding the overwhelming majority of efforts in the literature at this point is that perpetration and bystander behavior may be changed by dispelling myths, changing attitudes, and providing education regarding sexual violence. We now turn to our database to examine the meta-analytic effect of primary prevention interventions, as well as the association between changing ideas about sexual violence and changing behavior in the data. To do so, we first describe the measures we coded and how we analyzed the data. We focus particularly on randomized evaluations because they are best situated to tell us about the causal link between these two factors.
Overall, outcomes referring to an individual’s ideas about sexual violence outnumber behavioral outcomes in our sample by 346 to 143. In the studies we coded, 199 measured only ideas-based outcomes, 34 measured only behaviors, and 62 measured both. Figure 3 displays the relationship between these outcomes over time and shows that behavioral measurements became more common, particularly from 2010 onward: Whereas the ratio of behavioral to ideas-based outcomes was 1:4.59 in the first 15 years, after 2010 it narrowed to 1:1.34.
Measuring sexually violent behavior
The difficulty of measuring sexually violent behavior, and the resulting dearth of such measurement, are long-standing and well-recognized issues in the primary prevention of sexual violence (DeGue et al., 2014; Ellsberg et al., 2015; Koss & Oros, 1982). In this literature, behavior is typically measured through retrospective self-reports. We did not include measures of behavioral intentions that focus on predicting one’s future behavior because these kinds of measures are particularly subject to demand characteristics, and medium-to-large changes in behavioral intentions are needed to observe small-to-medium changes in behavior (Webb & Sheeran, 2006). Accordingly, we included studies that measured (overwhelmingly, self-reported) behavioral outcomes that fall into four categories: perpetration, victimization, bystander behavior, and involvement. Figure 4 demonstrates the trends over time in these measurements. We observe that perpetration and victimization have been the primary behavioral outcomes over the years. However, with the rise of bystander interventions, we notice a rise in the measurement of self-reported bystander behaviors. We now turn to discuss each of these four behavioral categories.
Fig. 4.
Behavioral outcomes over time.
The first two behavioral categories are perpetration and victimization. “Perpetration” outcomes measure whether, and how often, respondents have perpetrated sexual violence, whereas “victimization” outcomes measure whether respondents have been the victim of sexual violence or harassment.
The most common and rigorous measure we encountered for measuring perpetration and victimization is the SES (Koss et al., 2007, 2023; Koss & Oros, 1982), which is a self-reported, retrospective assessment of one’s perpetration and victimization. The strength of the SES lies in its lack of judgment or labeling of sexual-violence behaviors, which is intended to encourage self-reporting. The SES lists a number of “unwanted” sexual behaviors and asks the respondent whether they have done these behaviors (to measure perpetration) or have had these behaviors done to them (to measure victimization). The SES covers a wide range of behaviors and so does not merely cover more prototypical scripts of sexual violence (Peterson & Muehlenhard, 2007). The original form of the SES asked “yes” or “no” questions. For example, “Have you given in to sex play (fondling, kissing, or petting, but not intercourse) when you didn’t want to because you were overwhelmed by a man’s continual arguments and pressure?” The SES was later updated by a consortium of experts (see Koss et al., 2007; Testa et al., 2004). These updates included more forms of coercion, a different response method, and a specified time frame. In this updated version, participants are asked about a specific act (as opposed to a number of acts); for example, “Someone had oral sex with me or made me have oral sex with them without my consent.” Participants are then provided with multiple options describing different forms of coercion, including intoxication, manipulation, threats of violence to self or loved ones, and physical force. Next to each of these options are two columns of tick boxes, one for the event having happened in the last 12 months and the other for the event having happened since the respondent was 14 years old (Koss et al., 2007).
The SES is considered the gold standard for measuring perpetration and victimization, but it also presents a few difficulties. There are multiple ways to score the SES (Davis et al., 2014). If separate perpetrator and victim surveys are not administered, it can be hard to tell whether the respondent was a perpetrator or a victim, whether any reported unwanted sexual acts occurred in a single incident or across multiple repeated incidents, and whether they reflect multiple instances with one perpetrator or multiple perpetrators. And, of course, the SES is a self-report measure of events, not a direct observation of behavior (Koss et al., 2007). Notably, the SES is currently being revised to address some of these limitations (see Koss et al., 2023).
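To make the scoring ambiguity concrete, below is a minimal, purely hypothetical sketch of one simple dichotomous scoring approach (any endorsed item counts as reporting victimization). The item labels and responses are invented, this is not the authors’ coding procedure, and Davis et al. (2014) compare nine alternative scoring methods.

```r
# Hypothetical illustration only: dichotomous scoring of SES-style yes/no items.
# Item names and responses are invented; real SES items also record time frame
# and the form of coercion involved.
ses <- data.frame(
  id    = 1:4,
  item1 = c("no", "yes", "no", "no"),
  item2 = c("no", "no",  "no", "yes"),
  item3 = c("no", "no",  "no", "no")
)

item_cols <- c("item1", "item2", "item3")

# Dichotomous score: 1 if the respondent endorsed any item, 0 otherwise
ses$any_endorsed <- as.integer(rowSums(ses[item_cols] == "yes") > 0)
ses
```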
The third and fourth categories became popular with the rise of bystander interventions. The third category, “bystander” behaviors, measures whether, and how often, participants “intervene . . . in cases of sexual violence before, during, and after incidents with strangers” (Banyard et al., 2007, p. 463). The most common dependent variable in this category is the Bystander Behavior Scale, which includes “yes”/“no” responses to prompts such as whether participants have asked “a friend if they need to be walked home from a party” (Darlington, 2014, p. 41), as well as items that describe speaking out against social norms that support sexual violence (e.g., spoke up against sexist jokes, spoke up against commercials depicting violence against women; Kleinsasser et al., 2015, p. 231).
The fourth category, “Involvement” behaviors, represents measures of participants’ interest in participating in sexual-violence awareness and prevention activities. For example, the opportunity to volunteer for a campus-wide rape-education organization is measured by whether participants tore off the bottom of a page with “the number of this organization in order to call or volunteer” (Gillies, 1997, p. 51).
Measuring ideas about sexual violence
We encountered a multitude of scales measuring individuals’ ideas about sexual violence—their attitudes, knowledge, perception of norms, belief in myths, and more. After going through all of the articles reviewed in DeGue et al. (2014), we decided to prioritize the measures that were most common in the literature and most in line with the psychological rationales guiding primary prevention efforts. As stated in our preregistration, the most common outcomes we found were myths regarding rape and sexual violence. In cases in which these were not measured, however, we accepted any type of norms or knowledge scale that was common in the literature and part of the theoretical rationale guiding it. As noted earlier, when studies reported multiple eligible outcomes, we took those that were assigned the most priority by the authors of the study.
The most common myth scales we encountered were the Rape Myth Acceptance Scale (RMA) and the Illinois Rape Myth Acceptance Scale (IRMA) in its long and short forms. Overall, these and other myth measures constitute 57% of the measures we coded that assessed ideas about sexual violence. Beliefs in rape myths are frequently measured because they are considered a proxy for perpetration: Communities with higher levels of endorsement of rape myths, sexist beliefs, and norms also had higher rates of violence against women (for a review, see Casey & Lindhorst, 2009). Thus, rape myth acceptance—in individuals and within communities—is thought to contribute significantly to what makes a culture prone to or permissive of sexual violence (see Canan et al., 2018).
Scales about sexual violence or rape myths typically contain four elements: disbelief of rape claims and the beliefs that victims are responsible for rape, that rape reports are a form of manipulation, and that rape happens only to certain kinds of women (Briere et al., 1985; Burt, 1980; Payne et al., 1999). Rape myth acceptance has been found to moderate the relationship between associating sex with power and domination and responses on a rape-proclivity measure (Chapleau & Oswald, 2010). Critically, Trottier et al. (2021) conducted a meta-analysis of the relationship between rape myth acceptance and perpetration covering 28 studies from 1992 to 2018 in the United States and Canada and found a correlation of r = .23, such that greater acceptance of rape myths was associated with higher rates of perpetration regardless of the specific scale used to measure myths (i.e., IRMA, RMA, etc.). This positive association does not necessarily indicate that rape myths cause sexual-violence perpetration, however, or that a reduction in rape myths is followed by a reduction in rape perpetration. A third variable may cause both myth acceptance and perpetration, or there could be a stronger causal relationship in the opposite direction; for example, the perpetration of violence could lead to the justification and minimization of that violence. Accordingly, whether intervening to dispel rape myths (or any other rape-related ideas) would change perpetration behavior in an observable way is a separate empirical question. To answer this question, we now turn to the empirical data we collected and examine both the overall treatment effects of primary prevention programs and the specific association between ideas about sexual violence and the occurrence of sexually violent behavior.
Quantitative Meta-Analysis
Meta-analytic approach
Our meta-analytic methods closely follow those laid out and developed in previous meta-analyses by the authors (Paluck et al., 2019, 2021). When possible, all outcomes were converted to Glass’s Δ—a standardized effect size measure that is equivalent to Cohen’s d except that the effect size is expressed in terms of the standard deviation of the control group rather than of the entire population (Cooper et al., 2019). When this procedure proved infeasible, outcomes were converted to Cohen’s d. These estimates were used as inputs to a random-effects meta-analysis. Further details of our methods, as well as a discussion of how we resolved some statistical issues that we did not anticipate in our preanalysis plan, can be found in the Supplemental Appendix. Our code and data can be found on the OSF at https://osf.io/w9hqs. We aimed for full analytic reproducibility according to the standards laid out in Polanin et al. (2020).
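To make the conversion step concrete, the sketch below computes Glass’s Δ and one standard large-sample approximation of its sampling variance from hypothetical summary statistics; the authors’ exact conversion rules are documented in their Supplemental Appendix and code, so this shows only the generic formula.

```r
# Glass's delta: standardized mean difference using the control group's SD.
# Inputs are hypothetical summary statistics, not data from any coded study.
glass_delta <- function(m_t, m_c, sd_c, n_t, n_c) {
  delta <- (m_t - m_c) / sd_c
  # A common large-sample approximation of the sampling variance
  v <- (n_t + n_c) / (n_t * n_c) + delta^2 / (2 * (n_c - 1))
  data.frame(yi = delta, vi = v)
}

glass_delta(m_t = 3.4, m_c = 3.1, sd_c = 1.2, n_t = 150, n_c = 140)
# yi ~= 0.25, vi ~= 0.014
```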
All statistical procedures were conducted in R (Version 4.1; R Core Team, 2023) using the tidyverse (Wickham et al., 2019) and metafor (Viechtbauer, 2010) packages, as well as custom functions. In what follows we first report on the overall meta-analytic results and the lack of publication bias found in this literature. We then move to meta-analyze outcomes for behaviors and ideas-based outcomes separately, paying close attention to behavioral outcomes in general and perpetration outcomes in particular. Finally, we turn to studies that measure both behavioral and ideas-based outcomes to test the overall association between these two constructs and examine whether changing ideas related to sexual violence leads to a change in perpetration behavior.
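For readers unfamiliar with this workflow, a minimal random-effects fit in metafor might look like the sketch below. The effect sizes (yi) and sampling variances (vi) here are invented and would in practice come from conversions like the one above; the authors’ actual scripts are available on the OSF.

```r
library(metafor)

# Hypothetical per-study effect sizes (Glass's delta) and sampling variances
dat <- data.frame(
  study = paste0("study_", 1:6),
  yi    = c(0.41, 0.10, 0.33, 0.05, 0.52, 0.18),
  vi    = c(0.020, 0.015, 0.030, 0.010, 0.045, 0.025)
)

# Random-effects meta-analysis with REML estimation of between-study variance
res <- rma(yi = yi, vi = vi, data = dat, method = "REML")
summary(res)
```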
Overall results and (lack of) publication bias
Our random-effects meta-analysis of all primary prevention strategies from 1985 to 2018 revealed an overall effect size (Δ), across all interventions and outcomes, of 0.28 (SE = 0.025). This effect was statistically significant (p < .0001) and corresponded to a small-to-medium effect size by convention. Further, this finding was robust to variations in estimation strategy (see Supplemental Appendix Section 3).
Next, we tested for publication bias within the literature. Publication bias occurs when studies showing evidence of intervention success are more likely to be published in academic journals. This problem has been highly prevalent in psychology and other social-science literatures (Open Science Collaboration, 2015; Franco et al., 2014; Paluck et al., 2021). We therefore conducted three checks to determine whether publication bias is prevalent in this literature as well. First, a telltale sign of bias is a strong positive relationship between effect sizes and their corresponding standard errors. In our database, this relationship was weak and nonsignificant (β = 0.16, SE = 0.178, p = .379), which suggests that small studies do not report systematically larger effects than large ones and is evidence against a “file-drawer problem” (Rosenthal, 1979).
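This check corresponds to an Egger-style meta-regression of effect sizes on their standard errors. A sketch of how such a test might be run with metafor, continuing the hypothetical `dat` and `res` objects above, is shown below; the authors’ exact specification is in their code on the OSF.

```r
# Egger-style check: do smaller studies (larger standard errors) report
# systematically larger effects? Continues the hypothetical `dat`/`res` above.
dat$sei <- sqrt(dat$vi)

# Option 1: meta-regression with the standard error as a moderator
bias_check <- rma(yi = yi, vi = vi, mods = ~ sei, data = dat)
summary(bias_check)  # a large, significant slope on sei would suggest bias

# Option 2: metafor's built-in regression test for funnel-plot asymmetry
regtest(res, predictor = "sei")
```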
Second, the relationship between standard errors and overall effect sizes remained nonsignificant within randomized (n = 102, β = 0.21, SE = 0.252, p = .415) and quasi-experimental (n = 100, β = −0.145, SE = 0.407, p = .722) evaluations, providing further evidence against publication bias in the literature. However, we found a significant relationship within observational designs (n = 96, β = 2.22, SE = 0.658, p < .001), suggesting some publication bias among these studies: Observational studies may face greater pressure to demonstrate significant findings as a precondition of getting published.
Our third check for publication bias was to separate the literature into studies published in peer-reviewed journals (n = 189) and those that were not (n = 106). Our assumption was that if publication bias occurs, it should be evident specifically among studies published in academic journals. In our sample, being unpublished was associated with a small reduction in effect size (β = −0.044, SE = 0.044), but not at a statistically significant level (p = .32). Within published studies, the relationship between standard errors and overall effect sizes was negligible and nonsignificant (β = 0.008, SE = 0.199, p = .914); although the association between the two variables was larger in unpublished studies (β = 0.603, SE = 0.366), the relationship was still not statistically significant (p = .101).
Overall, we found much less evidence of publication bias than found in previous meta-analyses (e.g., Paluck et al., 2019, 2021). We speculate that the public-health literature, from which many of these studies were drawn, may be less prone to publication pressure favoring positive results than the social-science literature (Simmons et al., 2011).
The meta-analytic effect of primary prevention on behavior and on ideas about sexual violence
Next, we took a close look at the impact of primary prevention programs on how people behave and how people think about sexual violence. Because the ultimate goal of primary prevention programs is to reduce perpetration behavior, we paid careful attention to behavioral outcomes. However, given the prevailing ideas-based approach to preventing sexual violence, we also closely examined whether primary prevention efforts were effective in changing behavior, ideas, both, or neither.
Overall, we found that behaviors were much more resistant to change than ideas. The overall effect size for behaviors was Δ = 0.071 (SE = 0.022, p = .002), whereas the overall effect size for ideas-based outcomes was Δ = 0.371 (SE = 0.031, p < .0001). The observed discrepancy between the magnitude of change in behavior and in ideas is in line with past meta-analytic estimates: DeGue et al. (2014) wrote that although three prior meta-analyses “reported small to moderate mean effects on attitudes ranging from 0.06 to 0.35 . . . Anderson and Whiston (2005) reported . . . small mean effect sizes for . . . incidence of sexual violence (0.12)” (p. 347). We also observed a negligible, statistically nonsignificant relationship between effect size and year of publication (β = 0.003, SE = 0.002), suggesting general consistency in effect sizes over time.
We further explored the meta-analytic effect of primary prevention interventions on behavior and on ideas about sexual violence by study design (see Table 2). This analysis revealed that the discrepancy between changing behaviors and changing ideas was substantial in every design category, and especially in observational research, in which effects on outcomes pertaining to an individual’s ideas about sexual violence were more than 20 times larger than effects on behavioral outcomes. Turning specifically to primary prevention programs’ effects on behavioral outcomes, we observed that the largest behavioral effects were found within randomized studies, which are the most rigorous designs, whereas the smallest were found within observational designs. However, although the meta-analytic effect for behavioral outcomes was statistically significant (p < .05) in both randomized and quasi-experimental designs, neither meets the conventional standard for a “small” effect size (Δ = 0.2).
Table 2.
Behavioral and Ideas-Based Effects Within Different Subsets of Study Design
Study type | Ideas | Behavior
---|---|---
Randomized controlled trial | Δ = 0.318***, SE = 0.04, n = 88 | Δ = 0.109*, SE = 0.045, n = 43
Quasi-experimental | Δ = 0.287***, SE = 0.037, n = 87 | Δ = 0.073*, SE = 0.033, n = 35
Observational | Δ = 0.509***, SE = 0.069, n = 89 | Δ = 0.021, SE = 0.034, n = 18
*p < .05. **p < .01. ***p < .001.
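A subgroup breakdown like the one in Table 2 can be produced either by pooling within subsets or by entering design and outcome type as moderators; the sketch below uses invented data and is not the authors’ analysis script (which is available on the OSF).

```r
library(metafor)

# Hypothetical coded effect sizes, one row per outcome
dat2 <- data.frame(
  yi      = c(0.35, 0.12, 0.28, 0.05, 0.55, 0.02, 0.40, 0.09),
  vi      = c(0.02, 0.03, 0.02, 0.01, 0.05, 0.02, 0.03, 0.02),
  outcome = rep(c("ideas", "behavior"), times = 4),
  design  = rep(c("rct", "quasi", "obs", "rct"), each = 2)
)

# Pooled effect within one cell of Table 2 (e.g., behavioral outcomes in RCTs)
rma(yi = yi, vi = vi, data = dat2,
    subset = (design == "rct" & outcome == "behavior"))

# Or test outcome type and design as moderators in a single model
rma(yi = yi, vi = vi, mods = ~ outcome * design, data = dat2)
```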
Do bystander interventions change sexually violent behavior?
As noted earlier, a bystander approach has prevailed in the literature on primary prevention since around 2007. The underlying assumption is that increasing bystander behavior will decrease perpetration and victimization. We tested this assumption within the 95 bystander-related interventions in our database, the majority (77/95) of which measured ideas about sexual violence such as beliefs, attitudes, and knowledge and were successful at changing them (Δ = 0.392, SE = 0.068, p < .0001).
Of the 95 bystander studies, 42 measured behaviors. Twenty-two measured whether bystander interventions increase bystander behaviors, which they did to a modest extent (Δ = 0.154, SE = 0.056, p = .011). Nineteen bystander studies measured perpetration or victimization, the outcomes these interventions ultimately aim to reduce. We did not find that bystander interventions meaningfully changed rates of perpetration (Δ = 0.019, SE = 0.041, p = .640) or victimization (Δ = −0.009, SE = 0.041, p = .835). This suggests, unfortunately, that nearly one in three studies in our database pursued a theory of behavior change that is not supported by the available evidence.
A closer look at behavioral outcomes
The picture becomes grimmer still when we separately meta-analyze the four categories of behavior present in our database: perpetration, victimization, bystander behaviors, and involvement. Table 3 shows the number of studies that measured each behavioral category and their pooled effect sizes. We also provide the overall effect size for ideas-based outcomes for reference. We found that on an aggregate level, primary prevention programs have not been effective in reducing perpetration and victimization behaviors at all. The corresponding effect sizes for these two critical outcomes are neither meaningful in magnitude nor statistically significant. It appears that primary prevention programming is currently effective in increasing only bystander or involvement behavior—and changing ideas about sexual violence. There is a lack of correspondence between changing ideas about sexual violence and reducing perpetration or victimization—the ultimate goal of any primary prevention intervention. We see this as an urgent problem: The prevailing model for how to go about reducing sexual violence does not appear to be a fruitful one.
Table 3.
Effect Size by Outcome Type
Outcome type | Studies, n | Δ (SE)
---|---|---
Perpetration | 53 | 0.032 (0.021)
Victimization | 40 | 0.046 (0.029)
Bystander | 23 | 0.129* (0.059)
Involvement | 9 | 0.236* (0.088)
Ideas-based | 261 | 0.371*** (0.031)
*p < .05. **p < .01. ***p < .001.
Given this concerning lack of evidence for reductions in victimization and perpetration, we next zoomed in on perpetration outcomes within randomized controlled trials, the gold standard for primary prevention research. Our sample featured 28 randomized designs that measured perpetration, which collectively produced a meta-analytic effect of 0.086 (SE = 0.061, p = .173). In the authors’ own words, 16 of these studies reported that their interventions did not produce a significant reduction in perpetration behavior, eight reported reductions, and two reported backlash effects on net. Taken together, these analyses are consistent with either a weak positive effect or an overall null effect for perpetration outcomes. For a closer look at two studies that illustrate the magnitude of behavioral changes in the literature, see Supplemental Appendix Section 7.
In sum, we have gone from an encouraging, meaningful overall average effect of primary prevention programs to a much more discouraging finding that, on the most crucial outcomes (perpetration and victimization), the literature on primary prevention has not yet found its footing. Perhaps this should not be surprising—we should not expect worldwide public-health problems to be easy to solve. However, these results also provide the backdrop for our claim that behavioral problems require behavioral solutions.
Do changes in ideas predict changes in behaviors?
A key question for the literature on primary prevention concerns the association between ideas about sexual violence and sexual violence itself. Specifically, a primary goal of this review was to examine whether the predicted association between changing ideas and changing behavior actually holds in the database. To do so, we focused on the 62 studies that measured both types of outcomes. In this subset of studies, the pooled behavior-change effect was Δ = 0.083 (SE = 0.029, p = .007), whereas the pooled ideas-change effect was Δ = 0.292 (SE = 0.039, p < .0001). In other words, the ideas-change effect was slightly lower than the corresponding effect in the full database (Δ = 0.371, SE = 0.031), whereas the behavior-change effect was slightly larger than its full-database counterpart (Δ = 0.071, SE = 0.022).
Unfortunately, in these studies, we found a small, statistically nonsignificant correlation, r(60) = .136, p = .293, between changes in ideas and changes in behaviors. The relationship remained small and nonsignificant when looking solely at randomized evaluations (r(27) = .138, p = .48).
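As an illustration of how this correlation is computed, the sketch below averages within-study effect sizes by outcome type and then correlates the two columns across studies; the numbers are invented, and the authors’ actual computation is in their OSF code.

```r
library(tidyverse)

# Hypothetical effect-size-level data for studies measuring both outcome types
effects <- tibble(
  study = c("s1", "s1", "s1", "s2", "s2", "s3", "s3", "s3", "s4", "s4"),
  type  = c("ideas", "ideas", "behavior", "ideas", "behavior",
            "ideas", "behavior", "behavior", "ideas", "behavior"),
  delta = c(0.50, 0.40, 0.05, 0.20, 0.10, 0.35, -0.02, 0.06, 0.10, 0.08)
)

# Average within each study by outcome type, then correlate across studies
study_level <- effects %>%
  group_by(study, type) %>%
  summarise(mean_delta = mean(delta), .groups = "drop") %>%
  pivot_wider(names_from = type, values_from = mean_delta)

cor.test(study_level$ideas, study_level$behavior)
```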
Figure 5 visually depicts this lack of correlation. Specifically, each point in the figure represents a study that measured both ideas and behaviors, with the average effect size for ideas about sexual violence (within a given study) on the x-axis and the average within-study behavioral effect size on the y-axis. The dotted black line shows what a correlation of 1.0 would look like, and the dotted gray line shows the correlation that we actually observe. Studies are color-coded by design.
Fig. 5.
Correlation between ideas-based outcomes and behavioral change.
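For readers who want to produce a plot in the style of Figure 5 from their own coded data, a ggplot2 sketch continuing the hypothetical `study_level` data above (with an invented design label) might look like this; the authors’ actual plotting code is on the OSF.

```r
library(ggplot2)

# Continue the hypothetical study-level data from above, adding a design label
study_level$design <- c("rct", "quasi", "obs", "rct")

ggplot(study_level, aes(x = ideas, y = behavior, color = design)) +
  geom_point(size = 2) +
  # Dotted reference line for a perfect 1:1 relationship
  geom_abline(slope = 1, intercept = 0, linetype = "dotted") +
  # Dotted fitted line for the relationship actually observed in the data
  geom_smooth(method = "lm", se = FALSE, linetype = "dotted", color = "grey50") +
  labs(x = "Ideas-based effect size (Glass's delta)",
       y = "Behavioral effect size (Glass's delta)",
       color = "Study design") +
  theme_minimal()
```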
In sum, although ideas such as myths about sexual violence appear malleable, the problem of sexual violence itself is much more resistant to change, and positive change on one front does not reliably lead to progress on the other.
Discussion of quantitative results
Our quantitative results are a bit of a Rorschach test. An optimist could look at this literature and conclude that overall we have a pretty good grip on how to change attitudes, beliefs, and knowledge about sexual violence, and although the overall effect size for behaviors was small (Δ = 0.071), it was also precisely estimated (p = .002), which suggests that the changes were real. These results were not driven by publication bias; in fact, stronger designs tended to reveal stronger effects on behavioral outcomes.
Further, average decreases in victimization and perpetration outcomes might underestimate the true effect of the programs because receiving an intervention could raise subjects’ awareness of experiences in their own lives that qualify as violence. If this is true, then even if treatments reduce sexual violence, treated subjects are systematically more likely to report it. Hirsch and Khan (2020) provide a dramatic illustration: a young man who realizes in a research setting that he once committed an assault (pp. 60–61). Recently, Boyer et al. (2022) reported evidence that asking about intimate-partner violence in a baseline survey can increase the frequency of reporting violence later, which they interpreted as participants noticing and remembering violence more after being asked. If experiences like this are common, then these quantitative results might underestimate the efficacy of available programs.
On the other hand, a pessimist might respond by pointing out that outcomes measuring ideas about sexual violence might be overestimating the effect of interventions. A typical intervention in this literature seeks to educate participants about rape myths and how to recognize sexually violent behavior and then asks them to take a questionnaire about rape myths and what counts as sexually violent behavior. There is reason to worry about social desirability: It is clear what answers the administrators of the survey are hoping for. This might explain why we did not observe a strong correlation between changes in ideas about sexual violence and acts of sexual violence. Further, although behavioral outcomes tended to be larger in randomized controlled trials, changes in ideas were greatest in observational research (Δ = 0.509 vs. Δ = 0.318 for randomized controlled trials and Δ = 0.287 for quasi-experimental designs).
Last, a pessimist might argue that the most straightforward explanation for the average reductions (Δ) of just 0.032 for perpetration and 0.046 for victimization is that we do not know how to change the rates of either. This interpretation is in keeping with evidence that overall rates of sexual violence did not change between 1985 and 2015, a period that covers the majority of the interventions included here (Koss et al., 2022).
Limitations of this analysis
We note two central limitations of our meta-analysis. First, our data collection stopped in 2018, so we are missing 6 years of subsequent literature. Second, we had to make decisions about which of the many outcomes to code for analysis. We prioritized self-reported behaviors, then bystander behaviors, and then the most common ideas-based measures such as rape myth acceptance. We excluded behavioral intentions, including measures of intentions to be an active bystander. On the basis of prior research (Webb & Sheeran, 2006) and our own views, we decided that measures of intentions would not give us a good idea of what people who participate in these interventions actually go on to do. As mentioned, there is reason to worry about experimenter demand effects, and many things can come between a person’s intentions and their behaviors. Yet we acknowledge that approaches that intervene on attitudes, norms, and self-efficacy beliefs to change intentions can be successful in changing behavior (Sheeran et al., 2016). Finally, it is worth noting that we still think ideas such as rape myths are worth dispelling for their own sake. These ideas are associated with antivictim and prodefendant judgments in lab studies, are prevalent in media coverage of cases of sexual violence, and directly impact victim well-being (Dinos et al., 2015; Franiuk et al., 2008; Kosloski et al., 2018). It just turns out these are two separate issues—changing the one will not necessarily change the other.
A Behavioral Problem
We can now see from a bird’s-eye view a substantial problem in the current state of primary prevention strategies for sexual violence. We have very little evidence that current interventions change behavior, especially perpetration behavior, which is their primary aim. We also have something of a shifted goalpost: Many interventions target bystander behavior, and bystander interventions do not appear to change rates of perpetration or victimization. We do have promising evidence that interventions can change problematic attitudes, beliefs, and norm perceptions related to sexual violence. Unfortunately, we do not have strong evidence to suggest that the changes in attitudes, norms, and beliefs that result from the tested interventions have positive downstream consequences for preventing perpetration. In fact, the weak and nonsignificant association between changed ideas and changed behavior observed in this data calls for rethinking traditional approaches that prioritize changing how people think about a social problem as a way to change how they behave. To better understand sexual violence as a behavioral problem, we first discuss the difficulty of measuring behavior in this space and then point to promising future avenues for measurement and intervention that serve as alternatives to an ideas-based approach.
Problems with behavioral measures
As mentioned above, numerous prior reviews (and this one) have lamented the lack of behavioral outcomes in the assessment of interventions. Directly measuring behavior is particularly difficult in the area of sexual violence for many reasons. First, direct observation is often impossible or unethical. Many sexual assaults occur with no outside witnesses when the perpetrator and victim are alone or in a household setting in which witnesses are often minors. Even if sexual violence takes place somewhere that direct observation is possible (e.g., public space, posted publicly online) there are ethical concerns about researchers failing to intervene, and third parties do not always agree about whether the events in question are consensual.
Typically, researchers rely on self-report, which raises a few problems. First, there are the usual issues surrounding self-report (i.e., difficulty remembering, concerns about demand effects, and social desirability). In addition, many sexual assaults go unreported (e.g., to the police; Thompson & Tapp, 2022), for many reasons. One concerns construal: People differ in how they interpret events of nonconsensual touching, and sometimes victims themselves do not characterize the behavior as violent until later, or ever. For example, among college students, conceptualizations of consent are multifaceted (e.g., distinguishing wanting from consent; Peterson & Muehlenhard, 2007), and whether an event is construed as rape or assault can depend on endorsement of particular rape myths about resistance (Peterson & Muehlenhard, 2004) and on how closely the event matches schemas of “prototypical” sexual violence involving strangers and physical force (Bondurant, 2001; Littleton et al., 2009; McKimmie et al., 2014), even though these are not the most common features of assault (Koss et al., 1988; Planty et al., 2013; Sanday, 1997). Even when sexual violence occurs, it may not be construed that way (Bondurant, 2001; Edwards et al., 2014).
Reporting is also disincentivized. Even when victims identify events as violent and want to report them, there are further barriers. Victims often report that the process of officially reporting the event and the investigation that follows constitute “secondary victimization,” in which experiences of victim blaming and discrimination exacerbate the trauma associated with the assault (Campbell & Raja, 2005; Jackson et al., 2017). Other victims hear about these experiences, which decreases their motivation to report when they have been sexually assaulted. In addition, reporting systems within organizations are often weak in the sense that few people know about them or trust them. The victim may feel threatened by the perpetrator, fear repercussions for reporting, or believe that law-enforcement officials cannot or will not do anything to help (Planty et al., 2013). In other words, people may (rightfully) fear institutional betrayal (Smith & Freyd, 2014). Victims also sometimes report reluctance to get the perpetrator in trouble (Planty et al., 2013) and express a preference for restorative over retributive justice practices (Koss, 2014). Official reports are therefore part of the story, but not the whole story.
The use of official reporting rates is further complicated by the likelihood that factors that contribute to the prevalence of sexual violence also contribute to barriers to reporting. For example, rates of reporting may be lower in places in which men occupy more positions of power. Relatedly, interventions aimed at reducing sexual violence can also improve reporting conditions. Imagine a college campus or workplace with high rates of sexual violence. If the workplace officially hosts programming aimed at reducing sexual violence, this can signal to employees that their workplace may be more willing to take their reports seriously. In this way, interventions can actually make rates of reporting go up. This can itself be a good outcome, and it need not signal that actual rates of violence have changed in either direction. It does, however, complicate assessing the efficacy of a given intervention in terms of official reports and could even disincentivize assessment.
Ideas for future programs
In our comprehensive review of primary prevention interventions, we have come to think of sexual violence as a behavioral problem without a theory of behavior. From a psychological perspective, it is easy to see sexual violence as a behavioral problem. Yet we do not currently have many approaches that aim to solve the problem of sexual violence with interventions that target behavior change. Instead, we predominantly see efforts to change attitudes, beliefs, norms, and knowledge surrounding sexual violence with the assumption that behavior change will follow.
What do theories of behavior change look like? Interventions that take a behavioral approach may do so by considering features of the environment. Physical spaces communicate local norms and expectations (Gantman & Paluck, 2022; S. McMahon et al., 2022) and make some behaviors easier to enact than others. Those features could be geographical configurations, such as a lack of common social space (Gantman & Paluck, 2022). An excellent example of this approach is the Shifting Boundaries intervention (Taylor et al., 2013, 2015) implemented in New York City middle schools. The intervention involved the introduction of building-based restraining orders, the placement of posters to increase awareness and reporting of violence, and a joint effort by students and teachers to identify physical hot spots where a greater presence of faculty or school personnel was needed. Taylor et al. (2013) reported that receiving this treatment alone was associated with significant reductions in the reported frequency of both sexual-harassment victimization and perpetration. Another idea, more relevant to the college setting, is based on the finding that college students do not have enough private spaces, other than bedrooms, in which to interact (Hirsch & Khan, 2020). Future work could therefore quantitatively evaluate the impact of introducing more public spaces to reduce sexual violence on campus. Interventions may also consider who is afforded power in a given situation, such as who owns the physical space, who has more money, who is driving home, and how the physical space communicates who is valued (Gantman & Paluck, 2022). For example, fraternities with poorly kept women’s restrooms were also places where sexual violence was more likely to occur (Boswell & Spade, 1996). We recommend a focus on geographical and other aspects of physical spaces as one untapped area for future theorizing and intervention programming that targets behavior change.
It is also worth considering interventions that do not mention sexual violence or even primarily target it. Our review encompassed only programs that explicitly sought to reduce sexual violence. We made this decision because we assume that these are the programs that people looking for a way to reduce sexual violence in their school or workplace will encounter. But interventions that do not explicitly target sexual violence have reduced violence; for instance, researchers have found that decriminalizing sex work is associated with lower rates of sex crimes in Rhode Island (Cunningham & Shah, 2018) and the Netherlands (Bisschop et al., 2017). Meanwhile, Haushofer et al. (2019) found that unconditional cash transfers distributed to women in Kenya reduced sexual violence by 0.22 standard deviations, a statistically significant effect, whereas cash transfers given to men reduced sexual violence by 0.10 standard deviations, which was not significant; both observed effects are much larger than the average perpetration effects observed in the literature on primary prevention. Note that when we evaluate interventions that do not primarily target ideas about sexual violence, we might still measure those ideas, not as a proxy for the success of the intervention but as one of its downstream consequences, possibly a downstream consequence of reducing sexual violence itself.
Intervening on physical configurations, or without mentioning sexual violence explicitly, may also be a promising avenue for preventing reactance, especially among at-risk men (Malamuth et al., 2018). It may also be a fruitful approach given the relationship between violence perpetration and personality variables that are presumably harder to change (Malamuth et al., 2021).
Ideas for future measurement
We began this section with a warning about the difficulties of measuring behavior. Now we are calling on researchers to take up a more behavior-centric approach to the reduction of sexual violence. How do we reconcile these issues? We cannot claim to solve the problem, but we offer some suggestions.
We suggest that researchers attempt to triangulate rates of perpetration and victimization through multiple avenues of measurement. When testing interventions conducted on a college campus, researchers can collect complaints filed with the university’s Title IX office, crime data recorded by police on campus grounds, and self-reports of perpetration and victimization. Ciacci and Sviatschi (2022) cross-referenced “stop-and-frisk” data from the New York Police Department (NYPD) with two versions of complainant data sets from the NYPD to assess the frequency of sex crimes. Although these behavioral data may be incomplete and imprecise, the authors used multiple measures to converge on and corroborate evidence in a single evaluation. Researchers may find that these different sources provide disparate pictures of what is really happening because rates of sexual violence derived from official reporting mechanisms lag drastically behind self-reports. Institutions can also improve their apparatus for reporting. They can strengthen institutional signals that victims will be believed and supported when they report (i.e., increase their institutional courage; Freyd, n.d.), for example, by adopting a smartphone app with time-stamped entries, optional official reporting functions, and the potential to identify repeat offenders (see https://www.projectcallisto.org). Scholars studying this must take into account that, at least in the short term, improved reporting mechanisms may cause measured rates to go up rather than down because the intervention can make victims feel more positive about reporting than before. Increases in reporting after a primary prevention intervention could therefore be seen as a positive outcome demonstrating the intervention’s success.
It is also important to be mindful of local events and their relationship to rates of sexual violence. For example, on college campuses, sexual violence is highest among freshman students in the period between the beginning of the school year and Thanksgiving (Cranney, 2015), and events on campus such as Division I football games increase daily reports of rape (Lindo et al., 2018). Finally, it is also possible to better assess the relationship between changing ideas about sexual violence and changing behavior. For example, Sharma (2022) tested a sexual-harassment training aimed at men in India and used an innovative mix of women reporting on the harassment behaviors of men in their class, trust games, direct questions, and hypothetical scenarios to elicit opinions. Schuster et al. (2022) combined self-reported behavioral data at multiple time points with “risky sexual scripts” and open-ended questions to provide a fuller picture of how and when ideas about sexual violence and sexual violence itself vary together. Although any given measure of perpetration behavior has its problems, researchers can combine multiple measures to get a better picture of how sexual violence is changing (or not) in response to an intervention.
Conclusion
In sum, we have reviewed the efficacy of primary prevention interventions to reduce sexual violence. We have found that the majority of interventions take an ideas-based approach—seeking to change rates of sexual violence by changing attitudes, knowledge, perceived norms, and beliefs about sexual violence. We identified trends in the literature over time, especially a shift to bystander interventions that seek to reduce violence by increasing bystander behavior. Our quantitative review found that current interventions are quite successful at changing ideas about sexual violence but are less successful at changing rates of perpetration. We offered some ideas for future directions for both intervening on and measuring behavior. We hope in the future to see more psychological research on the prevention of sexual violence, a behavioral problem without a behaviorally informed solution.
Supplemental Material
Supplemental material, sj-pdf-1-psi-10.1177_15291006231221978 for Preventing Sexual Violence: A Behavioral Problem Without a Behaviorally Informed Solution by Roni Porat, Ana Gantman, Seth A. Green, John-Henry Pezzuto and Elizabeth Levy Paluck in Psychological Science in the Public Interest
Acknowledgments
This article would not have been possible without the superb research assistance of Alison Goldberg, Alex Sanchez, Arielle Mindel, Margo Berends, Yashna Gungadurdoss, Matej Jungwirth, Paloma Bellatin, Varsha Gandikota, Somya Bajaj, Anne Kuhnen, Aditi Bhowmick, Jennifer K. Johnson, and Manna Selassie. We thank Martin Samuel Devaux for reviewing our code and Daniel Waldinger for his feedback on the figures. We thank Donald P. Green for providing guidance on the statistical estimator. We would also like to thank Sarah DeGue for sharing materials from her influential review.
Footnotes
ORCID iD: Seth A. Green https://orcid.org/0000-0003-3909-1969
Supplemental Material: Additional supporting information can be found at http://journals.sagepub.com/doi/suppl/10.1177/15291006231221978
Transparency
Editor: Nora S. Newcombe
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
Funding: This work was supported by funding from Princeton University (to E. L. Paluck).
References
- Ajzen I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. [Google Scholar]
- Ajzen I., Albarracín D., Hornik R. (2007). Prediction and change of health behavior: Applying the reasoned action approach. Lawrence Erlbaum Associates. [Google Scholar]
- Albarracín D., McNatt P. S., Klein C. T. F., Ho R. M., Mitchell A. L., Kumkale G. T. (2003). Persuasive communications to change actions: An analysis of behavioral and cognitive impact in HIV prevention. Health Psychology, 22(2), 166–177. 10.1037//0278-6133.22.2.166 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson L. A., Whiston S. C. (2005). Sexual assault education programs: A meta-analytic examination of their effectiveness. Psychology of Women Quarterly, 29(4), 374–388. [Google Scholar]
- Arango D. J., Morton M., Gennari F., Kiplesund S., Ellsberg M. (2014). Interventions to prevent or reduce violence against women and girls: A systematic review of reviews. The World Bank. [Google Scholar]
- Avery-Leaf S., Cascardi M., O’leary K. D., Cano A. (1997). Efficacy of a dating violence prevention program on attitudes justifying aggression. Journal of Adolescent Health, 21(1), 11–17. [DOI] [PubMed] [Google Scholar]
- Banyard V. L. (2011). Who will help prevent sexual violence: Creating an ecological model of bystander intervention. Psychology of Violence, 1(3), 216–229. [Google Scholar]
- Banyard V. L., Moynihan M. M., Plante E. G. (2007). Sexual violence prevention through bystander education: An experimental evaluation. Journal of Community Psychology, 35(4), 463–481. [Google Scholar]
- Basile K. C., DeGue S., Jones K., Freire K., Dills J., Smith S. G., Raiford J. L. (2016). Sexual violence prevention: Resource for action. Centers for Disease Control and Prevention. [Google Scholar]
- Basile K. C., Smith S. G., Kresnow M.-J., Khatiwada S., Leemis R. (2022). The national intimate partner and sexual violence survey: 2016/2017 report on sexual violence. Centers for Disease Control and Prevention. [Google Scholar]
- Bisschop P., Kastoryano S., van der Klaauw B. (2017). Street prostitution zones and crime. American Economic Journal: Economic Policy, 9(4), 28–63. [Google Scholar]
- Black M. C., Basile K. C., Breiding M. J., Smith S. G., Walters M. L., Merrick M. T., Stevens M. (2011). National intimate partner and sexual violence survey. Centers for Disease Control and Prevention. [Google Scholar]
- Bondurant B. (2001). University women’s acknowledgement of rape: Individual, situational, and social factors. Violence Against Women, 7(3), 294–314. 10.1177/10778012201007003004 [DOI] [Google Scholar]
- Boswell A. A., Spade J. Z. (1996). Fraternities and collegiate rape culture: Why are some fraternities more dangerous places for women? Gender & Society, 10, 133–147. [Google Scholar]
- Bouchard J., Wong J. S., Lee C. (2023). Fostering college students’ responsibility as prosocial bystanders to sexual violence prevention: A meta-analysis of the bringing in the bystander program. Journal of American College Health. Advance online publication. 10.1080/07448481.2022.2162825 [DOI] [PubMed] [Google Scholar]
- Boyer C., Levy Paluck E., Annan J., Nevatia T., Cooper J., Namubiru J., Heise L., Lehrer R. (2022). Religious leaders can motivate men to cede power and reduce intimate partner violence: Experimental evidence from Uganda. Proceedings of the National Academy of Sciences, USA, 119(31), Article e2200262119. 10.1073/pnas.2200262119 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Briere J., Malamuth N., Check J. V. P. (1985). Sexuality and rape-supportive beliefs. International Journal of Women’s Studies, 8(4), 398–403. [Google Scholar]
- Bronfenbrenner U. (2005). Making human beings human: Bioecological perspectives on human development. Sage. [Google Scholar]
- Brownmiller S. (2005). Against our will: Men, women and rape. Pearson Education New Zealand. (Original work published 1975) [Google Scholar]
- Burt M. R. (1980). Cultural myths and supports for rape. Journal of Personality and Social Psychology, 38(2), 217–230. 10.1037//0022-3514.38.2.217 [DOI] [PubMed] [Google Scholar]
- Campbell R., Raja S. (2005). The sexual assault and secondary victimization of female veterans: Help-seeking experiences with military and civilian social systems. Psychology of Women Quarterly, 29(1), 97–106. [Google Scholar]
- Canan S. N., Jozkowski K. N., Crawford B. L. (2018). Sexual assault supportive attitudes: Rape myth acceptance and token resistance in Greek and non-Greek college students from two university samples in the United States. Journal of Interpersonal Violence, 33(22), 3502–3530. 10.1177/0886260516636064 [DOI] [PubMed] [Google Scholar]
- Casey E. A., Lindhorst T. P. (2009). Toward a multi-level, ecological approach to the primary prevention of sexual assault: Prevention in peer and community contexts. Trauma, Violence, & Abuse, 10(2), 91–114. [DOI] [PubMed] [Google Scholar]
- Centers for Disease Control. (2022). Fast facts: Preventing sexual violence. https://www.cdc.gov/violenceprevention/sexualviolence/fastfact.html
- Chapleau K. M., Oswald D. L. (2010). Power, sex, and rape myth acceptance: Testing two models of rape proclivity. The Journal of Sex Research, 47(1), 66–78. 10.1080/00224490902954323 [DOI] [PubMed] [Google Scholar]
- Ciacci R., Sviatschi M. M. (2022). The effect of adult entertainment establishments on sex crime: Evidence from New York City. The Economic Journal, 132(641), 147–198. [Google Scholar]
- Coker A. L., Bush H. M., Fisher B. S., Swan S. C., Williams C. M., Clear E. R., DeGue S. (2016). Multi-college bystander intervention evaluation for violence prevention. American Journal of Preventive Medicine, 50(3), 295–302. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Coker A. L., Cook-Craig P. G., Williams C. M., Fisher B. S., Clear E. R., Garcia L. S., Hegge L. M. (2011). Evaluation of green dot: An active bystander intervention to reduce sexual violence on college campuses. Violence Against Women, 17(6), 777–796. [DOI] [PubMed] [Google Scholar]
- Connell N., Wilson C. (1974). Rape: The first sourcebook for women. The New American Library. [Google Scholar]
- Cooke R., Sheeran P. (2004). Moderation of cognition-intention and cognition-behaviour relations: A meta-analysis of properties of variables from the theory of planned behaviour. British Journal of Social Psychology, 43(2), 159–186. [DOI] [PubMed] [Google Scholar]
- Cooper H., Hedges L. V., Valentine J. C. (2019). The handbook of research synthesis and meta-analysis. Russell Sage Foundation. [Google Scholar]
- Cranney S. (2015). The relationship between sexual victimization and year in school in us colleges: Investigating the parameters of the “red zone.” Journal of Interpersonal Violence, 30(17), 3133–3145. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cunningham S., Shah M. (2018). Decriminalizing indoor prostitution: Implications for sexual violence and public health. The Review of Economic Studies, 85(3), 1683–1715. [Google Scholar]
- Darlington E. (2014). Decreasing misperceptions of sexual violence to increase bystander intervention: A social norms intervention [Doctoral dissertation, University of Oregon]. Scholars’ Bank. https://scholarsbank.uoregon.edu/xmlui/handle/1794/18489 [Google Scholar]
- Davis K. C., Gilmore A. K., Stappenbeck C. A., Balsan M. J., George W. H., Norris J. (2014). How to score the sexual experiences survey? A comparison of nine methods. Psychology of Violence, 4(4), 445–461. [DOI] [PMC free article] [PubMed] [Google Scholar]
- DeGue S., Simon T. R., Basile K. C., Yee S. L., Lang K., Spivak H. (2012). Moving forward by looking back: Reflecting on a decade of CDC’s work in sexual violence prevention, 2000–2010. Journal of Women’s Health, 21(12), 1211–1218. [DOI] [PMC free article] [PubMed] [Google Scholar]
- DeGue S., Valle L. A., Holt M. K., Massetti G. M., Matjasko J. L., Tharp A. T. (2014). A systematic review of primary prevention strategies for sexual violence perpetration. Aggression and Violent Behavior, 19(4), 346–362. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dinos S., Burrowes N., Hammond K., Cunliffe C. (2015). A systematic review of juries’ assessment of rape victims: Do rape myths impact on juror decision-making? International Journal of Law, Crime and Justice, 43(1), 36–49. [Google Scholar]
- Edwards S. R., Bradshaw K. A., Hinsz V. B. (2014). Denying rape but endorsing forceful intercourse: Exploring differences among responders. Violence and Gender, 1(4), 188–193. 10.1089/vio.2014.0022 [DOI] [Google Scholar]
- Ellsberg M., Arango D. J., Morton M., Gennari F., Kiplesund S., Contreras M., Watts C. (2015). Prevention of violence against women and girls: What does the evidence say? The Lancet, 385 (9977), 1555–1566. 10.1016/s0140-6736(14)61703-7 [DOI] [PubMed] [Google Scholar]
- Feltey K. M., Ainslie J. J., Geib A. (1991). Sexual coercion attitudes among high school students: The influence of gender and rape education. Youth & Society, 23(2), 229–250. [Google Scholar]
- Fischer G. J. (1986). College student attitudes toward forcible date rape: Changes after taking a human sexuality course. Journal of Sex Education and Therapy, 12(1), 42–46. [Google Scholar]
- Fishbach A., Friedman R. S., Kruglanski A. W. (2003). Leading us not into temptation: Momentary allurements elicit overriding goal activation. Journal of Personality and Social Psychology, 84(2), 296–309. [PubMed] [Google Scholar]
- Fisher B. S., Daigle L. E., Cullen F. T. (2010). Unsafe in the ivory tower: The sexual victimization of college women. Sage. [Google Scholar]
- Foshee V. A., Bauman K. E., Arriaga X. B., Helms R. W., Koch G. G., Linder G. F. (1998). An evaluation of safe dates, an adolescent dating violence prevention program. American Journal of Public Health, 88(1), 45–50. 10.2105/ajph.88.1.45 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Foshee V. A., Bauman K. E., Ennett S. T., Linder G. F., Benefield T., Suchindran C. (2004). Assessing the long-term effects of the safe dates program and a booster in preventing and reducing adolescent dating violence victimization and perpetration. American Journal of Public Health, 94(4), 619–624. 10.2105/ajph.94.4.619 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Foshee V. A., Bauman K. E., Ennett S. T., Suchindran C., Benefield T., Linder G. F. (2005). Assessing the effects of the dating violence prevention program “safe dates” using random coefficientregression modeling. Prevention Science, 6(3), 245–258. 10.1007/s11121-005-0007-0 [DOI] [PubMed] [Google Scholar]
- Foshee V. A., Bauman K. E., Greene W. F., Koch G., Linder G., MacDougall J. (2000). The Safe Dates program: 1-year follow-up results. American Journal of Public Health, 90(10), 1619–1622. 10.2105/ajph.90.10.1619 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Foshee V. A., Linder G. F., Bauman K. E., Langwick S. A., Arriaga X. B., Heath J. L., McMahon P. M., Bangdiwala S. (1996). The Safe Dates Project: Theoretical basis, evaluation design, and selected baseline findings. American Journal of Preventive Medicine, 12(5), 39–47. [PubMed] [Google Scholar]
- Foshee V. A., Mcnaughton Reyes H. L., Ennett S. T., Cance J. D., Bauman K. E., Bowling J. M. (2012). Assessing the effects of families for safe dates, a family-based teen dating abuse prevention program. Journal of Adolescent Health, 51(4), 349–356. 10.1016/j.jadohealth.2011.12.029 [DOI] [PubMed] [Google Scholar]
- Foubert J. (2000). The longitudinal effects of a rape-prevention program on fraternity men’s attitudes, behavioral intent, and behavior. Journal of American College Health, 48(4), 158–163. [DOI] [PubMed] [Google Scholar]
- Foubert J., Cowell E. A. (2004). Perceptions of a rape prevention program by fraternity men and male student athletes: Powerful effects and implications for changing behavior. Journal of Student Affairs Research and Practice, 42(1), 1–20. 10.2202/1949-6605.1411 [DOI] [Google Scholar]
- Foubert J., Cremedy B. (2007). Reactions of men of color to a commonly used rape prevention program: Attitude and predicted behavior changes. Sex Roles, 57(1-2), 137–144. 10.1007/s11199-007-9216-2 [DOI] [Google Scholar]
- Foubert J., La Voy S. (2000). A qualitative assessment of “The Men’s Program”: The impact of a rape prevention program on fraternity men. Journal of Student Affairs Research and Practice, 38, 18–30. 10.2202/1949-6605.1120 [DOI] [Google Scholar]
- Foubert J., Marriott K. (1996). Overcoming men’s defensiveness toward sexual assault programs: Learning to help survivors. Journal of College Student Development, 37, 470–472. [Google Scholar]
- Foubert J., Tatum J., Donahue G. (2006). Reactions of first-year men to rape prevention program: Attitude and predicted behavior changes. Journal of Student Affairs Research and Practice, 43, 578–598. 10.2202/1949-6605.1684 [DOI] [Google Scholar]
- Franco A., Malhotra N., Simonovits G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. [DOI] [PubMed] [Google Scholar]
- Franiuk R., Seefelt J. L., Vandello J. A. (2008). Prevalence of rape myths in headlines and their effects on attitudes toward rape. Sex Roles, 58, 790–801. [Google Scholar]
- Freyd J. J. (n.d.). Institutional betrayal to institutional courage. https://dynamic.uoregon.edu/jjf/institutionalbetrayal
- Gage A. J., Honoré J., Deleon J. (2016). Short-term effects of a violence-prevention curriculum on knowledge of dating violence among high school students in Port-au-Prince, Haiti. Journal of Communication in Healthcare, 9(3), 178–189. 10.1080/17538068.2016.1205300 [DOI] [Google Scholar]
- Gantman A. P., Paluck E. L. (2022). A behavioral-science framework for understanding college campus sexual assault. Perspectives on Psychological Science, 17(4), 979–994. 10.1177/17456916211030264 [DOI] [PubMed] [Google Scholar]
- Gidycz C. A., Orchowski L. M., Berkowitz A. D. (2011). Preventing sexual aggression among college men: An evaluation of a social norms and bystander intervention program. Violence Against Women, 17(6), 720–742. [DOI] [PubMed] [Google Scholar]
- Gillies R. A. (1997). Providing direct counter-arguments to challenge male audiences’ attitudes toward rape (Publication No. 9841291) [Doctoral dissertation, University of Missouri]. ProQuest Dissertations and Theses Global.
- Glasman L. R., Albarracín D. (2006). Forming attitudes that predict future behavior: A meta-analysis of the attitude-behavior relation. Psychological Bulletin, 132(5), 778–822. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harari H., Harari O., White R. V. (1985). The reaction to rape by American male bystanders. The Journal of Social Psychology, 125(5), 653–658. [DOI] [PubMed] [Google Scholar]
- Haushofer J., Ringdal C., Shapiro J. P., Wang X. Y. (2019). Income changes and intimate partner violence: Evidence from unconditional cash transfers in Kenya (Working Paper No. 25627). National Bureau of Economic Research. http://www.nber.org/papers/w25627 [Google Scholar]
- Hirsch J. S., Khan S. (2020). Sexual citizens: A landmark study of sex, power, and assault on campus. W. W. Norton & Company. [Google Scholar]
- Hostler M. J. (2014, February 24). Making campuses safer for women. The Wall Street Journal. https://www.wsj.com/articles/monika-johnson-hostler-making-campuses-safer-for-women-1393288017
- Jackson M. A., Valentine S. E., Woodward E. N., Pantalone D. W. (2017). Secondary victimization of sexual minority men following disclosure of sexual assault: “Victimizing me all over again. . .” Sexuality Research and Social Policy, 14, 275–288.
- Jewkes R., Nduna M., Levin J., Jama N., Dunkle K., Khuzwayo N., Koss M., Puren A., Wood K., Duvvury N. (2006). A cluster randomized-controlled trial to determine the effectiveness of stepping stones in preventing HIV infections and promoting safer sexual behaviour amongst youth in the rural Eastern Cape, South Africa: Trial design, methods and baseline findings. Tropical Medicine & International Health, 11(1), 3–16.
- Jewkes R., Nduna M., Levin J., Jama N., Dunkle K., Puren A., Duvvury N. (2008). Impact of stepping stones on incidence of HIV and HSV-2 and sexual behaviour in rural South Africa: Cluster randomised controlled trial. BMJ, 337, Article a506. 10.1136/bmj.a506
- Kelly J. (2006). Becoming ecological. Oxford University Press.
- Kettrey H. H., Marx R. A. (2019). Does the gendered approach of bystander programs matter in the prevention of sexual assault among adolescents and college students? A systematic review and meta-analysis. Archives of Sexual Behavior, 48(7), 2037–2053. 10.1007/s10508-019-01503-1
- Kettrey H. H., Thompson M. P., Marx R. A., Davis A. J. (2023). Effects of campus sexual assault prevention programs on attitudes and behaviors among American college students: A systematic review and meta-analysis. Journal of Adolescent Health, 72(6), 831–844. 10.1016/j.jadohealth.2023.02.022
- Kleinsasser A., Jouriles E. N., McDonald R., Rosenfield D. (2015). An online bystander intervention program for the prevention of sexual violence. Psychology of Violence, 5(3), 227–235.
- Kosloski A. E., Diamond-Welch B. K., Mann O. (2018). The presence of rape myths in the virtual world: A qualitative textual analysis of the Steubenville sexual assault case. Violence and Gender, 5(3), 166–173.
- Koss M. P. (2014). The restore program of restorative justice for sex crimes: Vision, process, and outcomes. Journal of Interpersonal Violence, 29(9), 1623–1660.
- Koss M. P., Abbey A., Campbell R., Cook S., Norris J., Testa M., Ullman S., West C., White J. (2007). Revising the SES: A collaborative process to improve assessment of sexual aggression and victimization. Psychology of Women Quarterly, 31(4), 357–370. 10.1111/j.1471-6402.2007.00385.x
- Koss M. P., Anderson R. A., Peterson Z. D., Littleton H., Abbey A., Kowalski R., Swartout K. (2023). The revised victimization version of the sexual experiences survey (SES-V): Conceptualization, modifications, items and scoring [Manuscript submitted for publication]. Health Promotion Sciences Department, University of Arizona.
- Koss M. P., Dinero T. E., Seibel C. A., Cox S. L. (1988). Stranger and acquaintance rape: Are there differences in the victim’s experience? Psychology of Women Quarterly, 12(1), 1–24.
- Koss M. P., Gidycz C. A., Wisniewski N. (1987). The scope of rape: Incidence and prevalence of sexual aggression and victimization in a national sample of higher education students. Journal of Consulting and Clinical Psychology, 55(2), 162–170. 10.1037/0022-006x.55.2.162
- Koss M. P., Oros C. J. (1982). Sexual experiences survey: A research instrument investigating sexual aggression and victimization. Journal of Consulting and Clinical Psychology, 50(3), 455–457.
- Koss M. P., Swartout K. M., Lopez E. C., Lamade R. V., Anderson E. J., Brennan C. L., Prentky R. A. (2022). The scope of rape victimization and perpetration among national samples of college students across 30 years. Journal of Interpersonal Violence, 37(1–2), NP25–NP47.
- Krug E. G., Dahlberg L. L., Mercy J. A., Zwi A. B., Lozano R. (Eds.). (2002). Violence: A global public health problem. In World report on violence and health (pp. 3–21). World Health Organization.
- Krug E. G., Mercy J. A., Dahlberg L. L., Zwi A. B. (2002). The world report on violence and health. The Lancet, 360(9339), 1083–1088.
- Lindo J. M., Siminski P., Swensen I. D. (2018). College party culture and sexual assault. American Economic Journal: Applied Economics, 10(1), 236–265. https://www.aeaweb.org/articles?id=10.1257/app.20160031
- Littleton H., Tabernik H., Canales E. J., Backstrom T. (2009). Risky situations or harmless fun? A qualitative examination of college women’s bad hook-up and rape scripts. Sex Roles, 60(11-12), 793–804. 10.1007/s11199-009-9586-8
- Malamuth N. M., Huppin M., Linz D. (2018). Sexual assault interventions may be doing more harm than good with high-risk males. Aggression and Violent Behavior, 41, 20–24.
- Malamuth N. M., Lamade R. V., Koss M. P., Lopez E., Seaman C., Prentky R. (2021). Factors predictive of sexual violence: Testing the four pillars of the confluence model in a large diverse sample of college men. Aggressive Behavior, 47(4), 405–420.
- McKimmie B. M., Masser B. M., Bongiorno R. (2014). What counts as rape? The effect of offense prototypes, victim stereotypes, and participant gender on how the complainant and defendant are perceived. Journal of Interpersonal Violence, 29(12), 2273–2303.
- McMahon P. M., Goodwin M. M., Stringer G. (2000). Sexual violence and reproductive health. Maternal and Child Health Journal, 4(2), 121–124. 10.1023/a:1009574305310
- McMahon S., Banyard V. L., Peterson N. A., Cusano J., Brown Q. L., Farmer A. Y. (2022). Physical spaces for campus sexual violence prevention: A conceptual model. Journal of Prevention and Health Promotion, 3(3), 347–378.
- Moher D., Liberati A., Tetzlaff J., Altman D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151(4), 264–269.
- Mousa S. (2020). Building social cohesion between Christians and Muslims through soccer in post-ISIS Iraq. Science, 369(6505), 866–870.
- Moynihan M. M., Banyard V. L., Arnold J. S., Eckstein R. P., Stapleton J. G. (2010). Engaging intercollegiate athletes in preventing and intervening in sexual and intimate partner violence. Journal of American College Health, 59(3), 197–204. 10.1080/07448481.2010.502195
- Mujal G. N., Taylor M. E., Fry J. L., Gochez-Kerr T. H., Weaver N. L. (2021). A systematic review of bystander interventions for the prevention of sexual violence. Trauma, Violence, & Abuse, 22(2), 381–396.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), Article aac4716. https://www.science.org/doi/10.1126/science.aac4716
- Page M. J., McKenzie J. E., Bossuyt P. M., Boutron I., Hoffmann T. C., Mulrow C. D., Shamseer L., Tetzlaff J. M., Akl E. A., Brennan S. E., Chou R., Glanville J., Grimshaw J. M., Hróbjartsson A., Lalu M. M., Li T., Loder E. W., Mayo-Wilson E., McDonald S., . . . Moher D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, Article n71. 10.1136/bmj.n71
- Paluck E. L. (2009). Reducing intergroup prejudice and conflict using the media: A field experiment in Rwanda. Journal of Personality and Social Psychology, 96(3), 574–587.
- Paluck E. L., Green S. A., Green D. P. (2019). The contact hypothesis re-evaluated. Behavioural Public Policy, 3(2), 129–158.
- Paluck E. L., Porat R., Clark C. S., Green D. P. (2021). Prejudice reduction: Progress and challenges. Annual Review of Psychology, 72, 533–560.
- Park S., Kim S.-H. (2023). A systematic review and meta-analysis of bystander intervention programs for intimate partner violence and sexual assault. Psychology of Violence, 13(2), 93–106. 10.1037/vio0000456
- Payne D. L., Lonsway K. A., Fitzgerald L. F. (1999). Rape myth acceptance: Exploration of its structure and its measurement using the Illinois Rape Myth Acceptance Scale. Journal of Research in Personality, 33(1), 27–68.
- Peterson Z. D., Muehlenhard C. L. (2004). Was it rape? The function of women’s rape myth acceptance and definitions of sex in labeling their own experiences. Sex Roles, 51, 129–144.
- Peterson Z. D., Muehlenhard C. L. (2007). Conceptualizing the “wantedness” of women’s consensual and nonconsensual sexual experiences: Implications for how women label their experiences with rape. Journal of Sex Research, 44(1), 72–88.
- Planty M., Langton L., Krebs C., Berzofsky M., Smiley-McDonald H. (2013). Female victims of sexual violence, 1994-2010. U.S. Department of Justice. https://bjs.ojp.gov/content/pub/pdf/fvsv9410.pdf
- Pohl J. D. (1990). Adolescent sexual abuse: An evaluation of a perpetrator and victim prevention program [Unpublished doctoral dissertation]. Georgia State University.
- Polanin J. R., Hennessy E. A., Tsuji S. (2020). Transparency and reproducibility of meta-analyses in psychology: A meta-review. Perspectives on Psychological Science, 15(4), 1026–1041.
- R Core Team. (2023). R: A language and environment for statistical computing (Version 4.1) [Computer software]. R Foundation for Statistical Computing. http://www.R-project.org
- Rivera A., Mondragón-Sánchez E., Vasconcelos F., Pinheiro P., Ferreira A., Galvão M. (2021). Actions to prevent sexual violence against adolescents: An integrative literature review. Revista Brasileira de Enfermagem, 74, e20190876. 10.1590/0034-7167-2019-0876
- Rosenthal R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641.
- Rutherford A. (2011). Sexual violence against women: Putting rape research in context. Psychology of Women Quarterly, 35(2), 342–347.
- Rutherford A. (2017). Surveying rape: Feminist social science and the ontological politics of sexual assault. History of the Human Sciences, 30(4), 100–123.
- Ruvacalba Y., Rodriguez A. L., Eaton A. A., Stephens D. P., Madhivanan P. (2022). The effectiveness of American college sexual assault interventions in highly masculine settings: A systematic review and meta-analysis. Aggression and Violent Behavior, 65, Article 101760. 10.1016/j.avb.2022.101760
- Sanday P. R. (1997). A woman scorned: Acquaintance rape on trial. University of California Press.
- Saul S., Taylor K. (2017). Betsy DeVos reverses Obama-era policy on campus sexual assault investigations. The New York Times. https://www.nytimes.com/2017/09/22/us/devos-colleges-sex-assault.html
- Scacco A., Warren S. S. (2018). Can social contact reduce prejudice and discrimination? Evidence from a field experiment in Nigeria. American Political Science Review, 112(3), 654–677.
- Schuster I., Tomaszewska P., Krahé B. (2022). A theory-based intervention to reduce risk and vulnerability factors of sexual aggression perpetration and victimization in German university students. The Journal of Sex Research, 60, 1206–1221.
- Senn C. Y., Eliasziw M., Barata P. C., Thurston W. E., Newby-Clark I. R., Radtke H. L., Hobden K. L. (2015). Efficacy of a sexual assault resistance program for university women. New England Journal of Medicine, 372(24), 2326–2335.
- Sharma K. (2022). Tackling sexual harassment: Experimental evidence from India. https://www.isid.ac.in/~epu/acegd2022/papers/Karmini_Sharma.pdf
- Sheeran P., Maki A., Montanaro E., Avishai-Yitshak A., Bryan A., Klein W. M., Miles E., Rothman A. J. (2016). The impact of changing attitudes, norms, and self-efficacy on health-related intentions and behavior: A meta-analysis. Health Psychology, 35(11), 1178–1188.
- Sheeran P., Webb T. L. (2016). The intention-behavior gap. Social and Personality Psychology Compass, 10(9), 503–518. 10.1111/spc3.12265
- Simmons J. P., Nelson L. D., Simonsohn U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
- Sinclair J., Sinclair L., Otieno E., Mulinge M., Kapphahn C., Golden N. (2013). A self-defense program reduces the incidence of sexual assault in Kenyan adolescent girls. The Journal of Adolescent Health, 53(3), 374–380. 10.1016/j.jadohealth.2013.04.008
- Sinozich S., Langton L. (2014). Rape and sexual assault victimization among college-age females, 1995–2013. U.S. Department of Justice. https://bjs.ojp.gov/library/publications/rape-and-sexual-assault-among-college-age-females-1995-2013
- Smith C. P., Freyd J. J. (2014). Institutional betrayal. American Psychologist, 69(6), 575–587.
- Swartout K. (2013). The company they keep: How peer networks influence male sexual aggression. Psychology of Violence, 3(2), 157–171.
- Taylor B. G., Mumford E., Liu W., Stein N. (2016). Assessing different levels and dosages of the Shifting Boundaries intervention to prevent youth dating violence in New York City middle schools: A randomized control trial. U.S. Department of Justice. https://nij.ojp.gov/library/publications/assessing-different-levels-and-dosages-shifting-boundaries-intervention
- Taylor B. G., Mumford E. A., Liu W., Stein N. D. (2017). The effects of different saturation levels of the shifting boundaries intervention on preventing adolescent relationship abuse and sexual harassment. Journal of Experimental Criminology, 13, 79–100.
- Taylor B. G., Mumford E. A., Stein N. D. (2015). Effectiveness of “shifting boundaries” teen dating violence prevention program for subgroups of middle school students. Journal of Adolescent Health, 56(2), S20–S26. 10.1016/j.jadohealth.2014.07.004
- Taylor B. G., Stein N., Burden F. (2010). The effects of gender violence/harassment prevention programming in middle schools: A randomized experimental evaluation. Violence and Victims, 25(2), 202–223.
- Taylor B. G., Stein N. D., Mumford E. A., Woods D. (2013). Shifting boundaries: An experimental evaluation of a dating violence prevention program in middle schools. Prevention Science, 14(1), 64–76.
- Testa M., VanZile-Tamsen C., Livingston J. A., Koss M. P. (2004). Assessing women’s experiences of sexual aggression using the sexual experiences survey: Evidence for validity and implications for research. Psychology of Women Quarterly, 28(3), 256–265.
- Thompson A., Tapp S. (2022). Criminal victimization, 2021. U.S. Department of Justice. https://bjs.ojp.gov/library/publications/criminal-victimization-2021
- Trottier D., Benbouriche M., Bonneville V. (2021). A meta-analysis on the association between rape myth acceptance and sexual coercion perpetration. The Journal of Sex Research, 58(3), 375–382.
- Verbeek M., Weeland J., Luijk M., van de Bongardt D. (2023). Sexual and dating violence prevention programs for male youth: A systematic review of program characteristics, intended psychosexual outcomes, and effectiveness. Archives of Sexual Behavior, 52, 2899–2935.
- Viechtbauer W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48.
- Vladutiu C. J., Martin S. L., Macy R. J. (2011). College- or university-based sexual assault prevention programs: A review of program outcomes, characteristics, and recommendations. Trauma, Violence, & Abuse, 12(2), 67–86. 10.1177/1524838010390708
- Webb T. L., Sheeran P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249–268.
- White House Task Force. (2014). Not alone: The first report of the White House Task Force to Protect Students from Sexual Assault. U.S. Government Printing Office. https://www.govinfo.gov/app/details/GOVPUB-PR-PURL-gpo48344
- Wickham H., Averick M., Bryan J., Chang W., McGowan L. D., François R., Grolemund G., Hayes A., Henry L., Hester J., Kuhn M., Pedersen T. L., Miller E., Bache S. M., Müller K., Ooms J., Robinson D., Seidel D. P., Spinu V., . . . Yutani H. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), Article 1686. 10.21105/joss.01686
- World Health Organization. (2005). WHO multi-country study on women’s health and domestic violence against women: Initial results on prevalence, health outcomes and women’s responses. https://www.who.int/publications/i/item/924159358X
- World Health Organization. (2010). Preventing intimate partner and sexual violence against women: Taking action and generating evidence. https://www.who.int/publications/i/item/9789241564007
- Wright L. A., Zounlome N. O., Whiston S. C. (2020). The effectiveness of male-targeted sexual assault prevention programs: A meta-analysis. Trauma, Violence, & Abuse, 21(5), 859–869.
- Zhang Y., Fishbach A. (2010). Counteracting obstacles with optimistic predictions. Journal of Experimental Psychology: General, 139(1), 16–31.
Supplementary Materials
Supplemental material, sj-pdf-1-psi-10.1177_15291006231221978, for “Preventing Sexual Violence: A Behavioral Problem Without a Behaviorally Informed Solution” by Roni Porat, Ana Gantman, Seth A. Green, John-Henry Pezzuto, and Elizabeth Levy Paluck, Psychological Science in the Public Interest.