Abstract
Objective:
Most studies of peer sexual violence (SV) prevention programs for adolescents focus on program outcomes or feasibility and acceptability; few examine how exposure levels or dosage affects impact. The present study examined the effects of attending multiple community-located youth-led prevention events, as compared to attending one or none, on peer violence (PV)-related attitudes and behaviors.
Method:
Middle and high school students (Mage at first wave = 13.7; 53.2% female; 76.5% White; 21.0% Native American) responded to surveys across 3 years. Logistic regression analyses compared students who attended one community-based event, two or more events, and zero events on sexual violence victimization, any other violence perpetration/victimization, social norms, denial of the problem of sexual violence, and bystander behaviors.
Results:
After controlling for exposure to longer prevention leadership training as well as baseline outcome levels, youth who participated in two or more community prevention events showed lower perpetration over time, improved prevention attitudes, and more helpful bystander actions in response to peer sexual violence. No significant differences were noted for attendance at one community-based event.
Conclusions:
The impact of out-of-school prevention events on youth behavior depends on more sustained engagement than one-time contact. Community-based prevention programs can use youth-led engagement strategies to help increase youth participation and the resulting benefits.
Keywords: peer violence prevention, dosage, youth-led, leadership
Sexual violence (SV) and co-occurring forms of interpersonal violence, including dating violence, sexual harassment, and bullying by peers (referred to collectively as peer violence, or PV), are important public health problems among adolescents. National Youth Risk Behavior Survey (YRBS) data indicate that one in four adolescents report bullying victimization, one in eight report dating violence victimization, and one in ten report sexual violence victimization in the past year (Centers for Disease Control & Prevention, 2019). Prevention programs developed in response to violence among youth have shown important impacts on decreasing perpetration and victimization by changing intermediate outcomes, such as bystander intervention and violence-supporting attitudes (Coker et al., 2019; Edwards et al., 2019; Foshee et al., 2005; Miller et al., 2013; Niolon et al., 2019). The majority of these programs take place in schools and are designed and implemented by adults; there are fewer studies of programs that take place in community spaces and that are youth-led (Edwards et al., 2016). These studies provide important foundational knowledge, but focus mainly on the broad question of whether the programs work. The present study evaluates a community-based prevention program and explores questions about the impact of dose, a facet of implementation. Specifically, the present study examined the impact of attending zero, one, or two or more community peer sexual violence prevention events on adolescents’ prevention attitudes and behaviors, including bystander intervention, victimization, and perpetration.
Implementation science reminds us that there are many barriers to the successful and ongoing use of prevention strategies in communities, and that different aspects of programs may be more or less important for leveraging change (Wandersman et al., 2008). One key aspect of implementation is the dose needed to produce effects, including the minimum dose of a prevention intervention required to produce intended impacts (Johnson et al., 2018; Proctor et al., 2015). While studies of these variables are common in areas such as substance use prevention, there are few related to PV prevention, though dosage is noted as a key question for this field (Banyard et al., 2018; Davidov et al., 2019; Mihalic & Irwin, 2003).
A number of implementation frameworks exist for understanding the conditions under which a prevention program is effective (Proctor et al., 2011), and the dose of prevention needed is a key factor (Rowbotham et al., 2019). Dose has been conceptualized both as how much of a program is delivered and, alternately, as how much prevention a participant was exposed to, and has been described as a component of fidelity (Dusenbury et al., 2003). In violence prevention, longer programs tend to produce greater effects, as do programs that include booster sessions to enhance dose (Banyard et al., 2007). Sustainability is also a key implementation outcome, as most prevention practitioners hope that if a strategy is successful, it will be maintained over time. Recent work by Johnson et al. (2018) noted that a depression prevention program’s sustainability may be enhanced by better understanding the minimum program dose needed to produce effects. Dose effects are also noted by Nation et al. (2003) in their review of effective components of prevention initiatives. Furthermore, studies of PV prevention show enhanced effects of booster sessions and combined prevention tools (Banyard et al., 2018; Bundy et al., 2011). Yet, given that additional resources are needed to implement such booster sessions and increased doses, it is important to examine whether multiple follow-up prevention activities yield added effects compared to a single one.
The present study examined whether a youth-led afterschool peer sexual violence prevention program, Youth Voices in Prevention (Youth VIP), was associated with lower levels of peer violence and greater bystander intervention. The primary prevention strategy was a multiday leadership retreat for youth in late middle and high school (Waterman et al., 2021). Youth learned fundamentals of peer sexual violence prevention, including social norms theory and bystander intervention, and practiced social emotional and diffusion of innovation skills. This multiday retreat showed positive impacts on reduction of peer violence perpetration, some increases in helpful bystander intervention behaviors, and improved prevention attitudes in youth who attended compared to their nonattending peers (Edwards et al., 2021). The project was designed to engage youth in follow-up community-based activities after the leadership retreat (open to all youth in the community), such as planning and producing public artwork focused on sexual violence prevention and interactive game nights that taught and reviewed sexual violence bystander skills. The length of follow-up activities varied based on the identified project (e.g., several hours or longer were needed to create the public art, whereas game night was held on a single night for approximately 2 hr). All activities were designed through youth–adult partnerships between youth prevention interns (who may have attended the retreat) and adults in the community. Youth engagement was maintained via paid youth internships, invitations to youth leaders (and encouraging them to bring friends), social media outreach, and incentives like food (see Waterman et al., 2021 for further programming details).
Goals of the Present Study
The present study focused on the brief community events to better understand their added value for youth who attended the foundational leadership retreat, and also to examine the associations of community event exposure with outcomes among youth who did not attend the retreat. A fundamental question was whether, compared to youth who did not attend any events, outcomes improved among youth who attended only one community event versus multiple events, while controlling for exposure to the baseline leadership retreat. The study used a naturalistic program evaluation design in that youth self-selected into event participation.
Our aim was to examine the association between community event dosage and several outcomes. The primary outcomes were sexual victimization, overall perpetration, and overall victimization (Aim 1a). Intermediary outcomes were social norms, bystander denial, proactive bystander behavior, and reactive bystander behavior in four peer sexual violence situations (touch, sexual violence, share, rumors; Aim 1b). Dose was measured as zero events versus one event versus two or more events. Specifically, we hypothesized that:
1a. Exposure to greater numbers of community prevention events would be associated with lower perpetration and victimization.
1b. Exposure to greater numbers of community prevention events would be associated with more positive social norms perceptions, lower bystander denial, and greater proactive and reactive bystander actions.
Method
Research Design and Setting
These data are part of a larger multiple baseline study to evaluate a youth-led sexual violence prevention project (Edwards et al., 2021). Data collection took place over 3 years at five waves: Fall 2017 (W1), Spring 2018 (W2), Fall 2018 (W3), Spring 2019 (W4), and Fall 2019 (W5). The average number of days from W1 was 180 to W2, 361 to W3, 531 to W4, and 733 to W5. The present paper used data from W1 (baseline) and W5 (after programming had concluded). The project was located in the Great Plains region of the U.S. in a small city and included partnerships with several community youth-serving nonprofits and the city school district.
Participants
Participants were 2,539 youth within a single school district. (The original sample was 2,647 youth, but we removed 108 participants [4.1%] for failing attention checks such as “Are you over 9 feet tall?” at one or more waves; see Edwards et al., 2021.) At W1, they were in Grades 7–10, and the mean age at W1 was 13.7 (SD = 1.2). Including participants from all waves, the sample was 53.2% female (n = 1,340) and 46.8% male (n = 1,178).1 Participants could identify as more than one race or ethnicity; the majority (76.5%, n = 1,903) identified as White, 21.0% (n = 522) as American Indian or Native American, 5.3% (n = 131) as Black/African American, 3.1% (n = 77) as Asian/Asian American, and 2.3% (n = 58) as Hawaiian/Pacific Islander. Moreover, 12.9% (n = 318) identified as Hispanic/Latino. Regarding sexual orientation, 88.9% (n = 2,141) identified as heterosexual/straight and 11.1% (n = 268) identified as a sexual minority (e.g., bisexual, lesbian, or gay).
Procedure
Written parental consent and student assent were required for youth to complete the survey. We invited all students in Grades 7–10 (n = 4,172) at the beginning of the Fall 2017 semester to enroll in the study; the first survey occurred between October 2017 and December 2017. We used intensive recruitment procedures: consent forms were sent to parents in multiple ways (i.e., via their students from school, mailings, and email), and we called and conducted home visits to households in which consent forms had not been returned. At study initiation, the majority of the 4,172 eligible students (n = 3,257; 78.0%) returned consent forms, and among those who returned forms, the majority of guardians (n = 2,667; 81.8%) gave permission for their student to take the survey. Most students (n = 2,232; 83.6%) with guardian permission took the survey. We conducted rolling recruitment, so that youth could continue to enroll in the study after the first survey; we used multiple imputation to address missing data (see below for details).
The survey was administered on computers in school by trained research staff. Students received a small incentive (e.g., fruit snack and pencil) and were entered to win one of 20 $100 gift cards, the value of which increased by $50 at each of the five subsequent surveys. There was an additional incentive drawing of five large prizes valued at approximately $1,000 (e.g., tablets and pizza party) for completing all surveys for which students were eligible. Overall, retention from W1 to W5 was 58.3% (we estimate that retention was 90.3% when removing students who left the district between W1 and W5). The highly transient nature of the community where data were collected was the largest factor in participant attrition. See Edwards et al. (2021) for more study protocol details, eligibility and participation by wave, and a detailed attrition analysis. In general, younger students and White students were more likely, whereas male students, students of color, and students who reported some forms of perpetration and victimization were less likely, to complete subsequent surveys.
Measures
The measures used in this study were drawn from previous scales, most of which have demonstrated adequate psychometric properties. Citations for each are provided below. Small adaptations (mostly to make wording more clear) were made for the present study based on cognitive interviews with youth prior to survey implementation (Edwards et al., 2021).
Exposure to Community Events
Participation in prevention activities was tracked by research staff at each event and subsequently linked to students’ survey data. A total of 132 community events took place during the time period of the project. The total time for each event was skewed and ranged from 0.5 to 9 hr, though the majority of events were 1–2 hr in length. The longer events were ones such as a street art event, during which mural painting took place over a 6-hr window, or a TEDx event that ran for 9 hr, but during which any given youth likely participated for only a portion of the time. Duration of events therefore did not necessarily correspond to the amount of prevention content delivered. The nature of this project (community-based, a combination of skill-building and awareness events) meant that the program was not manualized and did not prescribe explicit dosage hours. Thus, for the present study, we chose to operationalize dose as number of events attended, as we believe this more accurately reflects exposure to the violence prevention initiative, whereas total hours may be misleading. This operationalization of dosage for analysis was approved by the federal funder in line with the work and funding expectations. Given that the number of events attended was zero-inflated and highly skewed, we created three categories of attendance: attended zero events, attended one event, and attended two or more events.
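To make the dose coding concrete, the following minimal Python sketch (not the authors' Stata code; the attendance counts and column names are hypothetical) collapses per-student event counts into the three categories used in the analyses.

```python
import pandas as pd

def dose_category(n_events: int) -> str:
    # Collapse the zero-inflated, highly skewed event counts into three groups.
    if n_events == 0:
        return "zero events"
    if n_events == 1:
        return "one event"
    return "two or more events"

# Hypothetical attendance records linked to student survey IDs.
attendance = pd.DataFrame({"student_id": [101, 102, 103, 104],
                           "events_attended": [0, 0, 1, 4]})
attendance["dose"] = attendance["events_attended"].apply(dose_category)
print(attendance["dose"].value_counts())
```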
Interpersonal Violence
We used several measures to assess for a wide range of interpersonal violence victimization and perpetration experiences during the past 6 months, all with response options 1 = yes or 0 = no. We used mirror items to assess for both victimization and perpetration experiences. Three items assessing sexual assault were drawn from Coker et al. (2019) measure that assessed for sexual coercion (e.g., “You had sexual activities with someone because you either threatened to end your friendship or romantic relationship if they didn’t or because you pressured the other person by arguing or begging?”), physically forced sex [e.g., “You had sexual activities with someone by threatening to use or using physical force (twisting their arm, holding them down, etc.)?”], and incapacitated sex (e.g., “You had sexual activities with someone because she or he was drunk or on drugs?”). Five items adapted from the YRBS and other sources were used to assess physically forced sexual contact [e.g., “You forced someone to do sexual things that she or he did not want to do (count such things as kissing, touching, or physically forcing someone to have sexual intercourse)?”], sexual dating violence [e.g., “You forced someone you were dating or going out with to do sexual things that they did not want to do (count such things as kissing, touching, or physically forcing someone to have sexual intercourse)?”], physical dating violence [e.g., “You physically hurt someone you were dating or going out with on purpose (count such things as hitting, slamming into something, or injuring them with an object or weapon)?”], bullying on school property (e.g., “You bullied another person on school property?”), and electronic bullying [i.e., “You bullied another person electronically (count bullying through texting, Instagram, Facebook, or other social media)?”; Centers for Disease Control & Prevention, 2014]. We used three items from the American Association of University Women (2001) to assess for homophobic teasing and sexual harassment [i.e., sexual comments (e.g., “You made sexual comments, jokes, gestures, or looks about/to a person?”) and sexual rumors (e.g., “You spread sexual rumors about a person?”)]. In collaboration with community partners, we also added an item: “You threatened to share or actually shared a private picture of someone else the person did not want shared (including by texting, Instagram, Snapchat, Facebook, or other way of sharing).” Finally, two items assessed homophobic bullying [e.g., “You said a person was gay or a lesbian, as an insult (as a put down or to make fun of someone)?”] and racial bullying (e.g., “You bullied another person or were mean to another person because of the other person’s race/ethnicity/skin color?”). We created outcomes for the present paper for each wave indicating whether a student had experienced sexual victimization, any perpetration, and any victimization (1 = yes; 0 = no) in the past 6 months.
Social Norms for Sexual Violence Prevention
Three items were used to assess youth’s perceptions of injunctive norms related to sexual violence prevention. These items were adapted for this study from earlier work with middle and high school samples (Edwards et al., 2017). The three items included: (a) “My friends think that it is important for adults to talk to students about healthy relationships”; (b) “My friends think that students should show that it is NOT okay to joke or make fun of people’s bodies”; (c) “My friends think that students should talk about how to stop sexual assault (sexual assault is any sexual thing that happens when someone doesn’t want it to happen).” Students responded on a 4-point Likert scale ranging from 1 = strongly disagree to 4 = strongly agree, such that higher scores across all three items reflected more prosocial norm perception about prevention behaviors. Summative scores can be created by adding the three items or by taking the mean. Cronbach’s α for these items was .69/.78 for W1/W5.
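As a point of reference for the internal consistency values reported for this and the following scales, a minimal Python sketch of Cronbach's alpha is shown below (the item responses are simulated and the column names hypothetical; this is not the authors' computation).

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated responses to three items rated 1 = strongly disagree to 4 = strongly agree.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(300, 1))
items = pd.DataFrame(np.clip(base + rng.integers(-1, 2, size=(300, 3)), 1, 4),
                     columns=["norm1", "norm2", "norm3"])
print(round(cronbach_alpha(items), 2))
```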
Bystander Denial
We used the Denial subscale of the Readiness to Help Scale (D-RHS; Banyard et al., 2014; Edwards et al., 2017) to assess the extent to which students rejected the role that they could play in preventing relationship abuse and sexual assault. The D-RHS consisted of four items (e.g., “There is not much need for me to think about relationship abuse and/or sexual assault among middle and high school students.”). However, one item, “Doing something about relationship abuse and sexual assault is solely the job of the crisis center” was removed for this study given time constraints and youth reporting that the item was confusing during cognitive interviewing. Response options ranged from 1 = strongly disagree to 4 = strongly agree. Items are summed, so that higher numbers are indicative of higher denial of responsibility in situations of relationship abuse and sexual assault. In the present study, the Cronbach’s α was .59/.69 for W1/W5.
Bystander Behavior
We used the work of Banyard and colleagues (Banyard et al., 2014) as a base as well as adaptations of this work for high school samples by Coker and colleagues (Coker et al., 2019), and validity was confirmed in other analyses using this data set (Banyard et al., 2021). We chose the specific bystander situations included by drawing upon recent work on high school students as bystanders to dating violence, sexual harassment and violence, and stalking (Edwards et al., 2017). Four questions asked students about behaviors in which sexual harassment or sexual violence was about to happen or had already happened (referred to as reactive bystander behavior or reactive actionism). The items included: (a) “Saw or heard about a student grabbing or touching another student sexually (like on their butt or breasts)”; (b) “Saw or heard about a student using physical force or alcohol or drugs to make/force another student to have sex”; (c) “Saw or heard about a student sending a naked photo of another student without that person’s permission”; and (d) “Saw or heard about a student spreading sexual rumors about another student.” Two questions asked students about taking proactive steps to prevent sexual assault from happening in the first place (referred to as proactive bystander behavior). One assessed how much students “talked during the past 6 months with their friends or parents, teacher, minister, elder, etc. about things you all could do that might help stop sexual assault,” and the second inquired about the “use of social media (like Facebook, Twitter, etc.) or texting to show that sexual assault is not okay.” For the reactive questions, students were first asked the frequency of times that they had been in a similar situation, a measure of opportunity for each situation. The following answer choices were given: 0 = 0 times, 1 = 1–2 times, 3 = 3–5 times, 6 = 6–9 times, and 10 = 10 or more times. This opportunity variable was not asked in relation to the two prosocial actionist questions, because the nature of prosocial behaviors is such that anyone has the opportunity to do them most of the time.
Reactive Bystander Behavior.
For each of the four reactive behavior items for which participants reported having had the opportunity to take action, we asked how they responded. Participants were presented with the following types of behavior and asked to select all of the things that they did in response to witnessing the experience: (a) “Did nothing/ignored what was happening;” (b) “Laughed, took a video, or showed that you did not think what was happening was a big deal;” (c) “Tried to make the situation stop using distraction, such as dropping something to make a noise; starting a random conversation;” (d) “Get help from another teen, parent, and/or adult;” (e) “Said something or tried to stop the person doing the hurtful behavior;” and (f) “Said something or tried to help or support the person who was being hurt.” Scores were created to represent the degree to which, in each situation, participants acted positively. A point was subtracted from the score if participants acted negatively (behaviors a and b) and added to the score if participants acted positively (behaviors c, d, e, and f). Thus, final scores for each situation ranged from −2 to 4 (Banyard et al., 2021). Analyses of these outcomes were performed on smaller subsamples of participants who indicated opportunity, and thus findings may be less stable.
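The scoring rule described above can be summarized in a short Python sketch (a hypothetical helper, not the authors' scoring syntax): one point is subtracted for each negative response (a, b) and one point is added for each positive response (c–f), yielding a per-situation score from −2 to 4.

```python
NEGATIVE = {"a", "b"}              # did nothing/ignored; laughed, filmed, or minimized
POSITIVE = {"c", "d", "e", "f"}    # distracted; got help; confronted; supported the person hurt

def reactive_score(selected: set[str]) -> int:
    # "Select all that apply" responses for one witnessed situation.
    score = sum(r in POSITIVE for r in selected) - sum(r in NEGATIVE for r in selected)
    assert -2 <= score <= 4
    return score

print(reactive_score({"a", "d", "f"}))  # -1 + 1 + 1 = 1
```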
Proactive Bystander Behavior.
The two proactive bystander behavior questions used the following response options for each of the items: 0 = 0 times, 1 = 1–2 times, 3 = 3–5 times, 6 = 6–9 times, or 10 = 10 or more times. We used a mean of these items. The Cronbach’s α for these two items was .65/.69 for W1/W5.
Analysis Plan
All analyses were estimated in Stata 15, using the MI IMPUTE, MI ESTIMATE, LOGISTIC, and REGRESS commands. In all analyses, we used multiple imputation (MI; 20 imputed data sets) to address attrition and other missing data, an approach that assumes data are missing at random (Lang & Little, 2018). Recently, methodologists have called for principled missing data treatments in prevention research, advising against use of listwise deletion (Lang & Little, 2018), and this treatment of missing data is considered one part of sensitivity analyses (Thabane et al., 2013). Although completely random missingness is rare in longitudinal studies, we controlled for variables that we found to be associated with missingness across waves, such as sex and racial identity. To address Aim 1a, we conducted a series of logistic regression analyses, one for each of the dichotomous W5 outcomes as dependent variables (DVs; i.e., sexual victimization, overall perpetration, and overall victimization). The independent variables (IVs; predictors) in each model were attended one event (0 = no; 1 = yes) and attended two or more events (0 = no; 1 = yes), with attended zero events as the reference category. Covariates were sex (1 = male), ethnicity (1 = Hispanic/Latinx), race (1 = White), sexual orientation (1 = sexual minority), age (higher = older), school (dummy coded with the largest school as the comparison group), the W1 outcome scores (as applicable to each model), and retreat attendance. To address Aim 1b, we conducted a series of linear regression analyses. Each intermediate outcome (i.e., social norms, denial, proactive bystander behavior, reactive bystander behavior—touch, reactive bystander behavior—sexual violence, reactive bystander behavior—share, and reactive bystander behavior—rumors) was the DV in a separate model. The IVs and covariates were identical to those in the Aim 1a analyses.
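For readers who want to reproduce the general approach outside of Stata, the following Python sketch illustrates a multiply imputed logistic regression of a W5 outcome on the two dose indicators plus covariates (the authors used Stata's MI IMPUTE/MI ESTIMATE commands; the simulated data, column names, and reduced covariate set here are assumptions made only for illustration).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Simulated stand-in data: one row per student.
rng = np.random.default_rng(0)
n = 500
dose = rng.choice([0, 1, 2], size=n, p=[0.92, 0.06, 0.02])
df = pd.DataFrame({
    "w5_perp": rng.integers(0, 2, n).astype(float),   # W5 overall perpetration (0/1)
    "w1_perp": rng.integers(0, 2, n).astype(float),   # baseline outcome covariate
    "one_event": (dose == 1).astype(float),
    "two_plus": (dose == 2).astype(float),
    "male": rng.integers(0, 2, n).astype(float),
    "age": rng.normal(13.7, 1.2, n),
})
df.loc[rng.random(n) < 0.2, "w5_perp"] = np.nan        # simulate attrition

imp = mice.MICEData(df)                                 # chained-equations imputation
fml = "w5_perp ~ one_event + two_plus + male + age + w1_perp"
results = mice.MICE(fml, sm.Logit, imp).fit(10, 20)     # 20 imputations, pooled via Rubin's rules
print(results.summary())
```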
Below, we present unstandardized betas in text. For significant main effects on dichotomous outcomes (e.g., perpetration and victimization), we present the odds ratio as a measure of effect size. For significant main effects on continuous outcomes, we calculated unadjusted Cohen’s d using the descriptive statistics from the imputed data sets. In addition, as recommended by Goodman and Berlin (1994), we report 95% confidence intervals for analyses. Further sensitivity analyses were provided by subjecting the data to a second set of analyses using propensity score matching.
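A minimal sketch of the unadjusted Cohen's d calculation, using a pooled standard deviation, is shown below (the group summaries passed in are hypothetical values, not the study's results).

```python
import math

def cohens_d(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    # Standardized mean difference using the pooled standard deviation of the two groups.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# e.g., comparing a two-or-more-events group with a no-events group on a continuous outcome
print(round(cohens_d(2.0, 0.6, 62, 2.2, 0.6, 2330), 2))  # hypothetical means/SDs
```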
In regard to sample size, the present study was a community-based effectiveness trial. We ensured a large sample by choosing an appropriately large school district; however, our sample size was greatly affected by uncontrollable factors such as how many parents gave permission, how many students moved away from the district, and absenteeism on survey days. After the main analyses, we conducted a minimum detectable effect analysis (Juras et al., 2016; Mair et al., 2020). These analyses indicated that the study was appropriately powered for the three main outcomes presented in the current analyses (overall peer violence perpetration, overall victimization, and sexual victimization). The data used in the current analyses are not publicly available.
Results
Baseline Differences Between Groups
The present paper examined baseline differences among participants who attended no events, one event, and two or more events using a series of chi-square tests and one-way analyses of variance (Table 1). Few differences were statistically significant at the level of p < .05. At baseline, participants who attended more events were less likely to be male, χ2(2) = 17.48, p = .000, were younger, F(2) = 4.77, p = .000, reported more positive social norms, F(2) = 3.53, p = .029, and reported more proactive bystander behavior, F(2) = 3.03, p = .049, than participants who attended fewer events. Groups did not differ significantly on ethnicity, race, sexual minority status, sexual perpetration and victimization, overall perpetration and victimization, or reactive bystander behavior.
Table 1.
Baseline Group Differences by Program Dosage
Variable | No events (n = 2,300) | One event (n = 147) | Two or more events (n = 62) | p value
---|---|---|---|---
 | % (n) | % (n) | % (n) | 
Male | 47.99% (1,108) | 36.05% (53) | 27.42% (17) | <.01 |
Hispanic/Latinx | 12.95% (293) | 11.03% (16) | 14.75% (9) | .73 |
Sexual minority | 10.88% (240) | 11.11% (16) | 20.00% (12) | .09 |
White | 76.08% (1,733) | 83.67% (123) | 75.81% (47) | .11 |
Sexual perpetration | 3.33% (63) | 4.03% (5) | 1.85% (1) | .76 |
Sexual victimization | 7.03% (133) | 5.60% (7) | 5.56% (3) | .77 |
Overall perpetration | 28.58% (349) | 34.23% (38) | 8.16% (4) | .19 |
Overall victimization | 42.21% (515) | 45.54% (51) | 42.86% (21) | .46 |
 | M (SD) | M (SD) | M (SD) | 
Age | 13.76 (1.24) | 13.52 (1.19) | 13.35 (.95) | <.01 |
Social norms | 2.99 (.62) | 3.15 (.59) | 3.02 (.65) | .03 |
Denial | 2.10 (.61) | 2.05 (.67) | 2.03 (.59) | .51 |
Proactive bystander behavior | 0.54 (1.46) | 0.88 (1.82) | 0.67 (1.69) | .05 |
Reactive bystander behavior—touch | 0.37 (0.96) | 0.50 (1.06) | 0.53 (1.02) | .13 |
Reactive bystander behavior—sexual violence | 0.12 (0.61) | 0.18 (0.74) | 0.10 (0.47) | .49 |
Reactive bystander behavior—share | 0.15 (0.78) | 0.17 (0.85) | 0.23 (0.86) | .71 |
Reactive bystander behavior—rumors | 0.31 (0.98) | 0.35 (1.10) | 0.58 (1.21) | .11 |
Note. SD = standard deviation. Percentages are valid and raw/unadjusted. Statistically significant differences are bolded.
Aim 1a
Overall, 2,330 participants (91.77%) reported that they had not attended any community events, 147 (5.79%) attended one event, and 62 (2.44%) attended two or more events. Participants who attended two or more events were less likely than nonattendees to report subsequent overall perpetration, odds ratio (OR) = 0.14, p = .002, a small effect. At W5, 28.58% of the no events group, 34.23% of the one event group, and 8.16% of the two or more events group reported overall perpetration. The standard error was low and the CIs were narrow, which indicates a well-powered, stable model. The boundaries of the 95% CI are in a range that could be considered clinically meaningful, given that the values are in line with reported impacts of previously researched evidence-based programs (e.g., Taylor et al., 2013). Because overall perpetration comprised several types of interpersonal violence perpetration, we conducted post hoc analyses to understand whether a certain type of violence perpetration was driving this finding. We found evidence that this effect was largely driven by sexual harassment perpetration (OR = 0.14; p = .01). We did not find evidence of significant associations between community event attendance and sexual perpetration and victimization, or overall victimization (Table 2); standard errors for these outcomes were higher than that for overall perpetration. For sexual victimization, the unadjusted W5 rates for no events, one event, and two or more events were 7.81%, 9.91%, and 8.33%, respectively. For overall victimization, the corresponding unadjusted W5 rates were 42.21%, 45.54%, and 42.86%. We did not find significant associations for participants who attended one event compared to none.
Table 2.
Aim 1a: Effect of Community Program Dosage on Primary Outcomes (N = 2,539)
Variable | OR | SE | p | 95% CI lower | 95% CI upper
---|---|---|---|---|---
Sexual victimization | | | | | 
Outcome at T1 | 3.768 | 1.261 | .000 | 1.923 | 7.382 |
Male | 0.265 | 0.066 | .000 | 0.162 | 0.432 |
Hispanic/Latinx | 1.861 | 0.557 | .042 | 1.024 | 3.382 |
Sexual minority | 1.611 | 0.419 | .069 | 0.963 | 2.694 |
White | 1.617 | 0.579 | .186 | 0.786 | 3.326 |
Age | 0.887 | 0.141 | .451 | 0.646 | 1.217 |
1 event | 1.039 | 0.353 | .910 | 0.533 | 2.025 |
2 or more events | 0.476 | 0.296 | .234 | 0.140 | 1.622 |
Overall perpetration | |||||
Outcome at T1 | 3.788 | 0.542 | .000 | 2.848 | 5.037 |
Male | 2.274 | 0.270 | .000 | 1.799 | 2.874 |
Hispanic/Latinx | 1.020 | 0.212 | .924 | 0.674 | 1.544 |
Sexual minority | 2.272 | 0.475 | .000 | 1.498 | 3.446 |
White | 1.646 | 0.292 | .006 | 1.157 | 2.341 |
Age | 0.950 | 0.099 | .620 | 0.771 | 1.169 |
1 event | 1.202 | 0.265 | .404 | 0.780 | 1.852 |
2 or more events | 0.137 | 0.085 | .002 | 0.040 | 0.465 |
Overall victimization | |||||
Outcome at T1 | 3.866 | 0.550 | .000 | 2.906 | 5.143 |
Male | 0.892 | 0.099 | .308 | 0.716 | 1.112 |
Hispanic/Latinx | 1.120 | 0.201 | .528 | 0.785 | 1.600 |
Sexual minority | 2.155 | 0.471 | .001 | 1.390 | 3.340 |
White | 1.273 | 0.192 | .114 | 0.943 | 1.717 |
Age | 0.994 | 0.078 | .943 | 0.851 | 1.162 |
1 event | 1.122 | 0.241 | .592 | 0.735 | 1.713 |
2 or more events | 0.764 | 0.285 | .472 | 0.367 | 1.593 |
Note. All analyses included school and retreat as covariates; to preserve space we did not include these estimates. OR = odds ratio; CI = confidence interval. Statistically significant effects are bolded.
Aim 1b
Participants who attended two or more events reported less subsequent denial (b = −.22; d = .27), a small-to-medium effect, and more subsequent proactive bystander behavior (b = .47; d = .16), a small effect, than nonattendees. Participants who attended two or more events also reported more subsequent reactive bystander behavior—touch (b = .33; d = .19) and more subsequent reactive bystander behavior—sexual violence (b = .25; d = .20) than nonattendees, both small effects. Participants who attended one event reported more subsequent reactive bystander behavior—rumors (b = .19; d = .13), a small effect, than nonattendees. We did not find evidence of associations between community event attendance and subsequent social norms or reactive bystander behavior—share (Table 3). Inspection of confidence intervals shows that the reactive bystander outcomes span ranges that might all be considered clinically meaningful (suggesting that low power would not affect the findings). We explored results for Aims 1a and 1b with moderation by sex, race, sexual orientation, and age. We did not find that any of these social identity factors consistently moderated the results.
Table 3.
Aim 1b: Effect of Community Program Dosage on Intermediary Outcomes (N = 2,539)
Variable | b | SE | p | 95% CI lower | 95% CI upper | b | SE | p | 95% CI lower | 95% CI upper
---|---|---|---|---|---|---|---|---|---|---
 | Social norms | | | | | Denial | | | | 
Outcome at T1 | 0.212 | 0.029 | .000 | 0.155 | 0.269 | 0.275 | 0.027 | .000 | 0.222 | 0.328 |
Male | −0.218 | 0.034 | .000 | −0.286 | −0.150 | 0.252 | 0.033 | .000 | 0.185 | 0.318 |
Hispanic/Latinx | −0.052 | 0.068 | .456 | −0.191 | 0.087 | 0.019 | 0.049 | .695 | −0.079 | 0.118 |
Sexual minority | 0.051 | 0.058 | .384 | −0.065 | 0.167 | −0.069 | 0.059 | .247 | −0.187 | 0.049 |
White | −0.053 | 0.050 | .300 | −0.154 | 0.048 | 0.050 | 0.045 | .264 | −0.039 | 0.140 |
Age | −0.021 | 0.027 | .443 | −0.074 | 0.033 | −0.033 | 0.025 | .204 | −0.083 | 0.018 |
1 event | 0.100 | 0.059 | .092 | −0.016 | 0.216 | 0.018 | 0.061 | .767 | −0.103 | 0.140 |
2 or more events | 0.013 | 0.104 | .897 | −0.192 | 0.219 | −0.219 | 0.098 | .027 | −0.412 | −0.026 |
 | Proactive bystander behavior | | | | | Reactive bystander behavior—touch | | | | 
Outcome at T1 | 0.189 | 0.026 | .000 | 0.138 | 0.240 | 0.108 | 0.017 | .000 | 0.075 | 0.142 |
Male | −0.194 | 0.077 | .014 | −0.347 | −0.040 | −0.154 | 0.034 | .000 | −0.220 | −0.087 |
Hispanic/Latinx | −0.311 | 0.118 | .011 | −0.547 | −0.076 | 0.002 | 0.051 | .969 | −0.097 | 0.101 |
Sexual minority | 0.252 | 0.131 | .060 | −0.011 | 0.516 | −0.002 | 0.056 | .979 | −0.112 | 0.109 |
White | −0.165 | 0.087 | .061 | −0.337 | 0.008 | 0.074 | 0.042 | .079 | −0.009 | 0.157 |
Age | 0.105 | 0.054 | .055 | −0.002 | 0.213 | 0.025 | 0.024 | .293 | −0.022 | 0.072 |
1 event | −0.053 | 0.132 | .689 | −0.313 | 0.207 | 0.078 | 0.072 | .281 | −0.064 | 0.219 |
2 or more events | 0.469 | 0.217 | .032 | 0.040 | 0.897 | 0.330 | 0.119 | .006 | 0.096 | 0.564 |
 | Reactive bystander behavior—sexual violence | | | | | Reactive bystander behavior—share | | | | 
Outcome at T1 | 0.052 | 0.017 | .002 | 0.019 | 0.084 | 0.048 | 0.015 | .001 | 0.019 | 0.076 |
Male | −0.068 | 0.021 | .001 | −0.109 | −0.026 | −0.027 | 0.024 | .250 | −0.073 | 0.019 |
Hispanic/Latinx | 0.068 | 0.031 | .030 | 0.007 | 0.130 | 0.038 | 0.035 | .277 | −0.031 | 0.107 |
Sexual minority | −0.071 | 0.033 | .032 | −0.136 | −0.006 | −0.013 | 0.038 | .724 | −0.088 | 0.061 |
White | 0.023 | 0.026 | .382 | −0.028 | 0.074 | 0.002 | 0.029 | .945 | −0.056 | 0.060 |
Age | 0.017 | 0.015 | .271 | −0.013 | 0.047 | −0.007 | 0.017 | .698 | −0.040 | 0.027 |
1 event | 0.070 | 0.045 | .114 | −0.017 | 0.158 | 0.091 | 0.050 | .068 | −0.007 | 0.190 |
2 or more events | 0.250 | 0.074 | .001 | 0.105 | 0.395 | −0.024 | 0.083 | .770 | −0.187 | 0.139 |
 | Reactive bystander behavior—rumors | | | | | | | | | 
Outcome at T1 | 0.086 | 0.015 | .000 | 0.056 | 0.115 | |||||
Male | −0.140 | 0.030 | .000 | −0.200 | −0.080 | |||||
Hispanic/Latinx | 0.046 | 0.045 | .306 | −0.042 | 0.135 | |||||
Sexual minority | −0.103 | 0.048 | .033 | −0.198 | −0.008 | |||||
White | 0.015 | 0.038 | .700 | −0.060 | 0.089 | |||||
Age | 0.004 | 0.021 | .869 | −0.038 | 0.045 | |||||
1 event | 0.194 | 0.065 | .003 | 0.067 | 0.320 | |||||
2 or more events | −0.131 | 0.107 | .222 | −0.341 | 0.079 |
Note. All analyses included school and retreat as covariates; to preserve space we did not include those estimates. b = unstandardized regression coefficient; CI = confidence interval. Statistically significant effects are bolded.
Sensitivity Analysis
To further account for selection into program participation based on observed characteristics, these regressions were re-estimated with the addition of inverse probability of treatment weights (see Appendix; Guo & Fraser, 2014). This approach is based on the propensity score (i.e., the estimated probability of program participation given observed characteristics, reflecting how similar a given youth is to those who participated); observations are weighted according to these scores so that attendees and nonattendees are comparable on observed characteristics. The propensity model included all covariates and all lagged measures of the outcomes. This approach is doubly robust, as it uses both the control variables and the treatment weights in the model. The results were largely consistent with the findings reported above, except for the touch and sexual violence situations of the reactive bystander behavior measure. Given that participants only received reactive behavior scores if they reported opportunity to help, analyses for these outcomes were performed on a smaller subset of the sample and may have had diminished power to detect effects. Overall, the estimates from the regressions that included the inverse probability of treatment weights provide greater confidence that changes in perpetration, denial, and proactive bystander behavior were not due to selection into program participation.
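As a rough illustration of this type of weighted sensitivity analysis (not the authors' Stata implementation; the column names, the ATE-style weight formula, and the use of a linear probability model are assumptions made for the sketch), one could proceed as follows in Python.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def iptw_lpm(df: pd.DataFrame, treat: str, outcome: str, covars: list[str]):
    # 1) Propensity model: probability of attending two or more events,
    #    given covariates and lagged (baseline) outcomes.
    X_ps = sm.add_constant(df[covars])
    ps = sm.Logit(df[treat], X_ps).fit(disp=False).predict(X_ps)

    # 2) Inverse probability of treatment weights (ATE form):
    #    1/ps for attendees, 1/(1 - ps) for nonattendees.
    w = np.where(df[treat] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

    # 3) Weighted linear probability model that also retains the covariates
    #    as regressors (the "doubly robust" flavor described above).
    X_out = sm.add_constant(df[[treat] + covars])
    return sm.WLS(df[outcome], X_out, weights=w).fit(cov_type="HC1")

# Hypothetical usage, assuming a DataFrame `df` with these columns:
# res = iptw_lpm(df, "two_plus", "w5_perp", ["male", "white", "age", "w1_perp"])
# print(res.params["two_plus"], res.conf_int().loc["two_plus"])
```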
Discussion
Overall, the current findings suggest that a youth-led community-based prevention initiative had positive effects on some outcomes, but only among students who attended two or more community-based prevention events (with one exception). These findings are consistent with previous research that indicates the need for booster sessions and multiple exposures to prevention content to produce effects (Banyard et al., 2018; Finkelhor et al., 2014; Nation et al., 2003). Community event topics were similar across time (including bystander action, health and wellness from social emotional learning, and social norms related to peer violence) with some activities recurring (e.g., game night). Findings from the present study underscore that multiple interactions lead to greater success in violence prevention. Successful implementation may require resources to promote multiple program exposures over time as one-time programming is not sufficient for preventing complex problems, such as PV.
The present study extends these findings to community-based prevention with youth. Having events in a community can be an effective way to diffuse peer sexual violence prevention messages and enhance the effects of other training programs. Our findings show relatively lower levels of overall perpetration among those who participated in two or more events, a study strength given that PV perpetration occurs at low base rates within time frames such as the past 6 months and is thus difficult to detect as an outcome. The nonsignificant effects for sexual and overall victimization may be due to the low base rate of these events. It is also possible that youth-led initiatives are not sufficient to move the needle on these behaviors, or that attaining sufficient reach to see these effects may take more time than the current project allotted (Coker et al., 2019; Proctor et al., 2011). Further research is needed with a longer follow-up period, larger samples, and perhaps also older groups of students to examine whether a youth-led program such as Youth VIP impacts peer sexual violence perpetration.
Limitations
There were a number of study limitations that should be considered when interpreting the findings. We used and adapted the best measures available for assessing prevention outcomes, but this is a field still very much in development. For example, bystander intervention remains a complex construct to measure, particularly given variability in opportunities to help. Positive social norms have also been less well studied, and future studies using longer measures that might achieve greater reliability are needed. The bullying measure used the term “bullying” rather than behavioral descriptions, which might have suppressed responses, although these are standardized items used in the Youth Risk Behavior Survey (Centers for Disease Control and Prevention, 2019). In addition, some of the measures used in the present study, while widely used in other studies, have not undergone rigorous psychometric evaluation and produced low Cronbach’s alphas.
Although steps were taken to minimize the effects of selection, it remains possible that some unmeasured factors influenced both the likelihood of participating and the outcomes. Causal effects of exposure cannot be assumed. Importantly, only a small number of youth reported sexual perpetration, victimization, and overall perpetration. This is a common problem in evaluations of violence prevention programs. While it is critical to assess victimization and perpetration as outcomes, their low base rates over the time periods used for evaluation (especially if programs actually are successful at reducing violence) make detecting significance challenging. For this project, the challenges of low base rate sexual violence victimization and perpetration are compounded by focusing on youth, including middle schoolers (who have fewer sexual violence experiences), and by utilizing a youth leader dissemination intervention. Regardless, limited power makes null results more likely. In addition, although we used a modern missing data method, the reactive bystander outcome variables were based only on students who had the opportunity to help, reducing the power to detect effects for these outcomes.
Overall, only a small number of youth attended more than one event. This skew in the data meant that we could not do a more thorough dose–response analysis (with, say, three, four, or more sessions). While results are promising and consistent with program effects based on program dose, they should be interpreted with caution given the combination of low attendance at multiple program events and a low base rate behavior (sexual violence). The present study operationalized dose as number of events attended, which is a limited metric. Finally, this project took place in the Northern Great Plains region of the U.S., and the findings need to be replicated in more urban and international contexts.
Future Research Directions
Future work should consider what significant gains accrue from increased dosages (e.g., three or more events) and whether these gains are worth the increased resources required. Researchers need to follow up with more investigation of the resources needed for, and the sustainability of, multiple dose exposure. Unpacking different types of dose (e.g., boosters delivered through other programs, social media, or other mechanisms) is also important. Other important implementation variables include who delivers the sessions, understanding and dismantling core components, and understanding more about barriers to youth participation. Indeed, as Nation et al. (2003) discuss, more dose of a program that is limited by not being theoretically driven, comprehensive, and well-timed is unlikely to have benefits. For example, a process evaluation of the current program revealed that some youth found the material to be aimed at a younger age group than they were expecting, and this reduced their positive sense of the program (Waterman et al., 2021). Prevention researchers can assist communities by identifying the innovations that are most effective in improving youth engagement. For example, in the present study, we began the project with key informant interviews and asked youth questions about their engagement on surveys as part of a process evaluation (Waterman et al., 2021). Communities seeking to implement this type of prevention should begin with a similar data gathering process. Furthermore, while in the present study we used different methods, including internship stipends, social media outreach, food/meals at events, inclusion of fun activities (e.g., Titan Games), encouraging youth to bring friends with them, and varying the timing of events (evenings, afterschool during the week, and weekends), we did not have the data to empirically examine what impact each of these strategies had on participation. Tracking such data could help communities identify which strategies might be most successful.
Prevention, Clinical and Policy Implications
Youth engagement in community-based settings remains complex, as the adequacy of available local resources (e.g., funding, transportation, safe spaces to meet, dedicated adults, and program staff) and youth’s commitment (e.g., attending multiple sessions) may remain significant barriers to implementation and sustainability. Indeed, there may be particular challenges to getting teens to come to programs in the community rather than in the classroom. We know that youth have many competing demands on their time, including work, academics, sports, and other school-based clubs (Hutchison et al., 2021). Communities considering implementing these events may benefit from partnering with schools to include community activities as part of school-affiliated events (such as sports), offering events on weekends when they may be less likely to compete with after-school activities and jobs, and recruiting youth in friend groups (Hutchison et al., 2021). In the present study, we had some success by making some events more family oriented to engage younger siblings and parents as well as adolescents, pairing prevention activities with other entertaining recreational activities such as movies, giving out prevention-messaged swag, and including a meal for youth and their families. Events were held as stand-alone events and as part of larger community celebrations (e.g., a community parade). Cross-sectoral collaboration among adult leaders included, but was not limited to, religious leaders, law enforcement officers, community youth program leaders, and small business leaders. Finally, finding creative ways to engage youth on social media, especially during times such as the current pandemic, is a critical consideration for effective prevention for youth. Understanding the different engagement tools, and the variety needed to sustain youth participation and attendance, is critical to more fully sustaining programs, enacting prevention outside of schools, and understanding program dose associations for sexual violence prevention.
Implementation science benefits from a more detailed analysis of whether and how multiple program dosages can lead to improved violence prevention outcomes, such as decreases in peer violence perpetration and increases in proactive bystander behavior. While dosage-based programming may add burden in terms of resources and youth commitment, the present study adds support to the value of increasing youth exposure to prevention messaging over time. Youth VIP represents a promising youth-led sexual violence prevention intervention that can be replicated in other communities. Future research should consider economic cost–benefit analyses, effects of three or more doses, and strategies to successfully engage communities and youth in sustainable youth-led violence prevention efforts.
Acknowledgments
Funding for this study was provided by the U.S. Centers for Disease Control and Prevention (CDC) National Center for Injury Prevention and Control, Grant #U01-CEO02838. We owe a great deal of gratitude to our school and community partners and project staff. Without these individuals, this project would not have been possible. Thanks to Alex Haralampoudis for statistical consultation. We have no conflicts of interests to disclose.
Appendix
Propensity Score Matching Linear Probability Models
Dose | Overall perp | Overall vic | SV perp | SV vic | Soc norms | Denial | Proact | Touch | SV | Photo | Rumors |
---|---|---|---|---|---|---|---|---|---|---|---|
Propensity score match^a | | | | | | | | | | | 
1 event | .09 | −.01 | .01 | −.01 | .11 | .04 | .01 | .08 | .13 | .13 | .29* |
2 or more | −.23*** | −.02 | .01 | −.00 | −.16 | −.29** | .82+ | .08 | .04 | .06 | −.22** |
Effect size | .51 | .48 | .62 |
Note. SV = sexual violence.
^a Controlling for all lagged dependent variables (except sexual violence perpetration/victimization), gender, sexual minority status, race/ethnicity, age, and school.
+ p < .10. * p < .05. ** p < .01. *** p < .001.
Footnotes
The findings and conclusions in this manuscript are those of the authors and do not necessarily represent the official position of the CDC.
We present valid percentages; because some students selected “I decline to answer,” numbers do not necessarily add to the total N. Counts represent participants’ identity at the first wave at which they took the survey; for example, if a participant identified as male at Wave 1 and female at Wave 3, that participant was counted as male here (at each wave, approximately 0.3% of youth reported a different sex than they did previously).
References
- American Association of University Women. (2001). Hostile hallways: Bullying, teasing, and sexual harassment in school. https://www.ccasa.org/wp-content/uploads/2014/01/AAUW-Hostile-hallways-report.pdf
- Banyard V, Potter SJ, Cares AC, Williams LM, Moynihan MM, & Stapleton JG (2018). Multiple sexual violence prevention tools: Doses and boosters. Journal of Aggression, Conflict and Peace Research, 10(2), 145–155. 10.1108/JACPR-05-2017-0287
- Banyard V, Waterman EA, Edwards KM, & Valente TW (2021). Adolescent peers and prevention: Network patterns of sexual violence attitudes and bystander actions. Journal of Interpersonal Violence, 3(73), Article 886260521997448. 10.1177/0886260521997448
- Banyard VL, Moynihan MM, Cares AC, & Warner R (2014). How do we know if it works? Measuring outcomes in bystander-focused abuse prevention on campuses. Psychology of Violence, 4(1), 101–115. 10.1037/a0033470
- Banyard VL, Moynihan MM, & Plante EG (2007). Sexual violence prevention through bystander education: An experimental evaluation. Journal of Community Psychology, 35(4), 463–481. 10.1002/jcop.20159
- Bundy A, McWhirter PT, & McWhirter JJ (2011). Anger and violence prevention: Enhancing treatment effects through booster sessions. Education & Treatment of Children, 34, 1–14. 10.1353/etc.2011.0001
- Centers for Disease Control and Prevention. (2014). Youth risk behavior surveillance—United States, 2013 (Surveillance Summaries, Issue). https://www.cdc.gov/mmwr/preview/mmwrhtml/ss6304a1.htm
- Centers for Disease Control and Prevention. (2019). High school youth risk behavior survey data. http://nccd.cdc.gov/youthonline/
- Coker AL, Bush HM, Brancato CJ, Clear ER, & Recktenwald EA (2019). Bystander program effectiveness to reduce violence acceptance: RCT in high schools. Journal of Family Violence, 34(3), 153–164. 10.1007/s10896-018-9961-8
- Davidov D, Bush HM, Clear ER, & Coker AL (2019). Using a multiphase mixed methods triangulation design to measure bystander intervention components and dose of violence prevention programs on college campuses. Journal of Family Violence, 35(6), 1–12. 10.1007/s10896-019-00108-5
- Dusenbury L, Brannigan R, Falco M, & Hansen WB (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. 10.1093/her/18.2.237
- Edwards KM, Banyard V, Waterman EA, Mitchell KJ, Jones LM, & Mercer Kollar LM (2021). Evaluating the impact of a youth-led interpersonal violence prevention program: Youth leadership retreat outcomes [Manuscript under review].
- Edwards KM, Banyard VL, Sessarego SN, Stanley LR, Mitchell KJ, Eckstein RP, Rodenhizer KAE, & Leyva CP (2017). Measurement tools to assess relationship abuse and sexual assault prevention program effectiveness among youth. Psychology of Violence. Advance online publication. 10.1037/vio0000151
- Edwards KM, Banyard VL, Sessarego SN, Waterman EA, Mitchell KJ, & Chang H (2019). Evaluation of a bystander-focused interpersonal violence prevention program with high school students. Prevention Science, 20(4), 488–498. 10.1007/s11121-019-01000-w
- Edwards KM, Jones LM, Mitchell KJ, Hagler MA, & Roberts LT (2016). Building on youth’s strengths: A call to include adolescents in developing, implementing, and evaluating violence prevention programs. Psychology of Violence, 6(1), 15–21. 10.1037/vio0000022
- Finkelhor D, Vanderminden J, Turner H, Shattuck A, & Hamby S (2014). Youth exposure to violence prevention programs in a national sample. Child Abuse and Neglect: The International Journal, 38(4), 677–686. 10.1016/j.chiabu.2014.01.010
- Foshee VA, Bauman KE, Ennett ST, Suchindran C, Benefield T, & Linder GF (2005). Assessing the effects of the dating violence prevention program “safe dates” using random coefficient regression modeling. Prevention Science, 6(3), 245–258. 10.1007/s11121-005-0007-0
- Goodman SN, & Berlin JA (1994). The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Annals of Internal Medicine, 121(3), 200–206. 10.7326/0003-4819-121-3-199408010-00008
- Guo S, & Fraser MW (2014). Propensity score analysis: Statistical methods and applications (Vol. 11). Sage Publications.
- Hutchison C, Waterman EA, Edwards K, Banyard V, Hopfauf S, & Simon B (2021). Attendance at a community-based, after school, youth-led sexual violence prevention initiative [Manuscript submitted for publication].
- Johnson JE, Wiltsey-Stirman S, Sikorskii A, Miller T, King A, Blume JL, Pham X, Moore Simas TA, Poleshuck E, Weinberg R, & Zlotnick C (2018). Protocol for the ROSE sustainment (ROSES) study, a sequential multiple assignment randomized trial to determine the minimum necessary intervention to maintain a postpartum depression prevention program in prenatal clinics serving low-income women. Implementation Science, 13(1), Article 115. 10.1186/s13012-018-0807-9
- Juras R, Comfort A, & Bein E (2016). How study design influences statistical power in community-level evaluations. Office of adolescent health evaluation technical assistance brief (Vol. 3). Health and Human Services Office of Adolescent Health.
- Lang KM, & Little TD (2018). Principled missing data treatments. Prevention Science, 19(3), 284–294. 10.1007/s11121-016-0644-5
- Mair MM, Kattwinkel M, Jakoby O, & Hartig F (2020). The minimum detectable difference (MDD) concept for establishing trust in nonsignificant results: A critical review. Environmental Toxicology and Chemistry, 39(11), 2109–2123. 10.1002/etc.4847
- Mihalic SF, & Irwin K (2003). Blueprints for violence prevention: From research to real-world settings—factors influencing the successful replication of model programs. Youth Violence and Juvenile Justice, 1(4), 307–329. 10.1177/1541204003255841
- Miller E, Tancredi DJ, McCauley HL, Decker MR, Virata MCD, Anderson HA, O’Connor B, & Silverman JG (2013). One-year follow-up of a coach-delivered dating violence prevention program: A cluster randomized controlled trial. American Journal of Preventive Medicine, 45(1), 108–112. 10.1016/j.amepre.2013.03.007
- Nation M, Crusto C, Wandersman A, Kumpfer KL, Seybolt D, Morrissey-Kane E, & Davino K (2003). What works in prevention: Principles of effective prevention programs. American Psychologist, 58(6–7), 449–456. 10.1037/0003-066X.58.6-7.449
- Niolon PH, Vivolo-Kantor AM, Tracy AJ, Latzman NE, Little TD, DeGue S, Lang KM, Estefan LF, Ghazarian SR, McIntosh WLKW, Taylor B, Johnson LL, Kuoh H, Burton T, Fortson B, Mumford EA, Nelson SC, Joseph H, Valle LA, & Tharp AT (2019). An RCT of dating matters: Effects on teen dating violence and relationship behaviors. American Journal of Preventive Medicine, 57(1), 13–23. 10.1016/j.amepre.2019.02.022
- Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, & Padek M (2015). Sustainability of evidence-based healthcare: Research agenda, methodological advances, and infrastructure support. Implementation Science, 10(1), Article 88. 10.1186/s13012-015-0274-5
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, & Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health, 38(2), 65–76. 10.1007/s10488-010-0319-7
- Rowbotham S, Conte K, & Hawe P (2019). Variation in the operationalisation of dose in implementation of health promotion interventions: Insights and recommendations from a scoping review. Implementation Science, 14(1), Article 56. 10.1186/s13012-019-0899-x
- Taylor BG, Stein ND, Mumford EA, & Woods D (2013). Shifting boundaries: An experimental evaluation of a dating violence prevention program in middle schools. Prevention Science, 14(1), 64–76. 10.1007/s11121-012-0293-2
- Thabane L, Mbuagbaw L, Zhang S, Samaan Z, Marcucci M, Ye C, Thabane M, Giangregorio L, Dennis B, Kosa D, Debono VB, Dillenburg R, Fruci V, Bawor M, Lee J, Wells G, & Goldsmith CH (2013). A tutorial on sensitivity analyses in clinical trials: The what, why, when and how. BMC Medical Research Methodology, 13(1), Article 92. 10.1186/1471-2288-13-92
- Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, Blachman M, Dunville R, & Saul J (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181. 10.1007/s10464-008-9174-z
- Waterman E, Hutchison C, Edwards K, Hopfauf S, Simon B, & Banyard V (2021). A process evaluation of a youth-led sexual violence prevention initiative. Journal of Prevention and Health Promotion. Advance online publication. 10.1177/26320770211010817