PLOS One
. 2024 Mar 28;19(3):e0301023. doi: 10.1371/journal.pone.0301023

Focused deterrence: A protocol for a realist multisite randomised controlled trial for evaluating a violence prevention intervention in the UK

Tia Simanovic 1,#, Paul McFarlane 2,*,#, Iain Brennan 1,#, William Graham 3, Alex Sutherland 4,#
Editor: Nickolas Zaller5
PMCID: PMC10977732  PMID: 38547103

Abstract

Introduction

Focused deterrence (FD) is a frequently cited intervention for preventing violence, particularly violence involving urban gangs. The Youth Endowment Fund (YEF) believes it could be effective in the UK, based primarily on research conducted in the US. However, we contend that these studies have inadequate methodological designs, insufficiently rigorous testing, and small sample sizes. Therefore, the evidence supporting focused deterrence as an effective method, particularly outside the US, is inconclusive. The aim of this protocol is to better understand the potential effects of FD in the UK context, using a multisite experimental design to investigate the evidence of its likely impact more closely.

Methods

We planned a realist randomised controlled trial. The design centres on a multisite trial consisting of two-arm randomised experiments in five locations. Each trial location will test its implementation of a core programme specified by the funder. The multisite nature will allow us to understand differential impacts between locations, improving the external validity of the results. Participants will be randomly selected from a wider pool of individuals eligible for the intervention. We estimate that a sample size of approximately N = 1,700 individuals is required. Based on this pooled sample size, a relative reduction of 26% would be detectable in 80% of trials. The trial is coupled with a formative process evaluation of delivery and fidelity. The formative evaluation will use a mixed methods design. The qualitative component will include semi-structured cross-sectional and longitudinal interviews with programme leads, the programme delivery team, and programme participants, as well as observations of meetings between the programme delivery team (i.e., community navigators/mentors) and programme participants. The quantitative data for the formative evaluation will be gathered by the sites themselves and will consist of routine outcome performance monitoring using administrative data. Sampling for interviews and observations will vary, with the researchers aiming for a higher number of individuals in the first round of cross-sectional interviews and retaining as many as possible for repeat interviews and observations.

Discussion

This protocol outlines the process and impact evaluation methodology for the most extensive multisite evaluation of focused deterrence to date in the UK. Spanning five distinct sites with seven trials, the evaluation includes a cohort of 2,000 individuals, making it the only multisite randomised trial of focused deterrence to date. Employing an integrated realist evaluation framework, the study uses qualitative and quantitative research methods. The anticipated findings will offer pivotal insights for formulating future violence prevention policies in the UK and are expected to contribute significantly to the literature on violence prevention and intervention evaluation.

Trial registration

Protocol registration: ISRCTN: 11650008 4th June 2023.

Introduction

Violence is a leading cause of injury and mortality among young people worldwide [1]. It is associated with physical, psychological [2] and social harms [3] to victims, and potentially harmful contact with the criminal justice system for perpetrators [4, 5]. It also acts as a ‘signal’ of social disorder [6] to the wider population, affecting fear of crime and feelings of safety. Following a steady decline in all types of violence in the early 2000s, rates of police-recorded violence in the UK have trended upwards since 2013/14 [7]. In 2018, the Serious Violence Strategy [8] established “a new balance between prevention and effective law enforcement” (p.7) as the approach of the government of England and Wales to reducing violence, particularly violence involving young people. Since the publication of this strategy, over £800m has been allocated to finding and implementing effective violence prevention in England and Wales, often through the importation of interventions from the US. Although modest progress has been made in reducing violence in some areas [9], robust evidence of what works to reduce community violence in England and Wales remains elusive.

Focused deterrence is one of the most promising interventions for addressing community and youth violence [10, 11]. It has been implemented in dozens of jurisdictions across diverse urban settings, predominantly in the United States, with published evidence generally indicating positive effects in reducing serious violence [12, 13]. Similarities in the likely causes of community violence between the US and the UK mean that focused deterrence has the potential to be effective in the UK. However, past attempts to implement focused deterrence have met with implementation challenges [14] or inconclusive evidence of effectiveness [15, 16], and the importance of contextual differences in violence, policing, support services and community attitudes is not well understood. In particular, the prevalence of homicide and other serious violence in England and Wales—the rate of homicide is six times higher in the US [17]—means that there are likely to be fewer opportunities for intervention and rarer outcomes in England and Wales compared to the US.

Focused deterrence interventions are adapted to their location, but all include some combination of the following principles: (1) The intervention targets individuals and/or groups identified as being involved in serious violence in a community or area. These individuals are identified using intelligence from various official and community sources. They are informed that they are the focus of police attention and that various mechanisms will be activated, such as disruption to their daily activities and withdrawal of or reduction in statutory services, such as housing, should they continue to participate in violence. (2) They are also informed that, should they wish to desist from violence, a variety of support services will be made available to them to facilitate this desistance. (3) To underline the legitimacy and credibility of this message, it is conveyed by a combination of police, statutory and voluntary sector organisations, and community members in different formats. These three components—making violence less attractive through increased perceived probability of consequences, creating additional opportunities to desist from violence, and signalling community support for both components—bring together core theories of crime prevention that may have separate mechanisms but may also interact to affect violence outcomes.

Despite the promise of focused deterrence interventions, the empirical evidence supporting its effectiveness is modest. A Campbell systematic review of twenty-four studies [13] concluded that the intervention was associated with reduced serious violence but highlighted common methodological limitations of the literature. For example, none of the studies used randomised allocation of treatment and control units, making them vulnerable to regression to the mean, and comparison groups were often other cities, or the same area at an earlier period, which may not provide reliable counterfactuals. The risk of observed effects being influenced by regression to the mean is particularly acute in violence prevention interventions because they are often funded and implemented in response to an extreme level of violence in an area, which is inclined to return to less extreme levels. Most concerning is that many of these studies were statistically underpowered, arising from the use of groups/gangs or areas as the treatment unit and interventions being implemented in a single city or jurisdiction, meaning that any possible sample would be small. This statistical underpowering, combined with the other limitations, presents a heightened risk of Type I errors. In addition, the studies' outcomes, units of allocation and analysis, and conclusions varied, making for a mixed and inconsistent picture of effectiveness. To improve understanding of the effectiveness of focused deterrence, evaluations must implement more robust study designs with larger sample sizes.

Aims

The study aims to evaluate the efficacy of focused deterrence as an intervention for reducing violent offending among individuals aged 14 and older who are at risk of violent offending in England. The study uses a multi-site parallel randomised controlled trial (RCT) design to assign participants to an intervention or a control group in a 1:1 ratio, with stratification based on age (adult/child status) and prior offending frequency. This design will facilitate a robust assessment of the intervention’s impact on the occurrence of violent offending within this demographic.

Methods and materials

Realist multisite randomised controlled trial

To achieve the aims of the study, we designed it as a realist randomised controlled trial. The design includes a summative multisite two-arm randomised impact evaluation, a formative process evaluation of both the delivery and fidelity of the intervention, and longitudinal qualitative accounts of the intervention experience.

Because this is one of the first such studies to use individuals as the unit of treatment, no feasibility study, pilot, or prior effect estimate was available to inform our study design. To overcome this limitation, we conducted power simulations using pre-baseline outcome data for the trial cohort across a range of plausible effect sizes. These simulations indicated that no one city was likely to have a sufficient number of eligible offenders or frequency of outcomes to permit a well-powered trial within the funding period of the project. Accordingly, the study was set up as a multisite trial to achieve a sample large enough for a robust experimental design of individual-level interventions. The success of this approach depends on consistent intervention delivery and evaluation across all sites. However, achieving such consistency is challenging in complex community settings with multiple partners. The protocol for this evaluation has been preregistered [18], and it notes that manualised interventions are unlikely to be exactly replicated in different violence prevention contexts. Despite receiving comparable resources and guidance, the five sites designed similar, but not identical, interventions based on their local contexts.

Without a feasibility study for a multisite trial, this study includes careful process monitoring to check treatment fidelity across sites and tries to control intervention and evaluation processes as much as possible. Since the evaluation team is not responsible for the programme design or delivery, ongoing observation and assessment are vital to justify pooling study data. The process evaluation will check the consistency of delivery throughout the trial period.

Our shift from group-level to individual-level treatment units represents a trade-off between the feasibility of achieving sufficient statistical power and the theory that a group-level mechanism is essential for a focused deterrence intervention to be effective. In effect, a randomised controlled trial using a group as the treatment unit is not possible within available resources and the population parameters of England and Wales (the remit of the Youth Endowment Fund). The multisite design will let us assess the between-site variability of an intervention that, on paper, has the same core components. Combining this with a realist evaluation, we can then try to disentangle how implementation has varied. This can, however, be challenging for multi-agency interventions developed within complex environments.

Intervention sites

This multisite trial, spanning five English cities—Coventry, Leicester, Manchester, Nottingham, and Wolverhampton—implements seven distinct interventions, all structured around the YEF Focused Deterrence framework (see Table 1). While each intervention adheres to the same foundational aims and theoretical underpinnings, adaptations have been made to accommodate variations in local contexts, resource availability, and organisational structures. For a comprehensive understanding of each intervention’s unique and common elements, detailed accounts, structured following the TIDieR (Template for Intervention Description and Replication) framework, are available on the Open Science Framework: https://osf.io/pvnj2/.

Table 1. Seven interventions across five sites and four delivery teams.

Trial | Team | Site | Intervention name
1 | Leicester, Leicestershire and Rutland Violence Reduction Network | Leicester City | The Phoenix Programme
2 | Greater Manchester Combined Authority | Manchester City | Another Chance Manchester
3 | Nottingham Violence Reduction Unit | Nottingham City | Another Way
4 | West Midlands Police | Coventry City | CIRV Coventry high risk pathway
5 | West Midlands Police | Coventry City | CIRV Coventry referral pathway
6 | West Midlands Police | Wolverhampton City | CIRV Wolverhampton high risk pathway
7 | West Midlands Police | Wolverhampton City | CIRV Wolverhampton referral pathway

Eligibility criteria and recruitment

Participants who might be eligible for the intervention are identified through police records and, in exceptional circumstances, may be referred through statutory services. A set of strictly defined eligibility criteria is then used by each site to identify potential participants from this pool. For example, in Nottingham, participants must:

  1. live within the defined boundaries for the site area;

  2. have group bonds formed from time spent in the area or have familial ties to the area;

  3. have been arrested for violence against a person, robbery, or weapon possession in the 12 months preceding the start of the intervention;

  4. have been arrested for one or more of the following offences as part of a group of three or more in the preceding 24 months: violence against the person, criminal damage and arson, robbery, drug offences, possession of weapon offences or public disorder.

Multi-agency programme panels then triage this shortlist of eligible participants at each site. The triaged list is also assessed for deconfliction (i.e., the removal or suspension of anyone subject to active police enforcement for organised crime activity) and the eligible cohort is then randomised into treatment and control groups.

Statistical power

In the absence of a feasibility or pilot study on which to base estimates of statistical power, we sought to simulate the trial and to deduce a range of required sample sizes based on plausible effect sizes.

Data

As part of the preparation phase, the five sites were asked to develop selection/eligibility criteria for the intervention, to identify the number of these individuals in their population, and to describe the number of police-recorded violence against the person offences attributed to them in a twelve-month period. This information allowed us to describe the distribution of outcomes (negative binomial) and to identify the anticipated number of individuals in treatment and control groups.

Effect size

As the focused deterrence literature is largely based on population-level treatment effects estimated with quasi-experimental designs and on outcomes (e.g., firearm homicide) that are very rare in a UK context, the study could not anticipate an effect size from the existing literature. However, other intervention types targeting populations at high risk of violent offending have used violence outcomes and randomised designs. The most promising intervention in the YEF Toolkit [19] is Cognitive Behavioural Therapy (CBT). CBT interventions are associated with a 25% reduction relative to controls [20]. Using this statistic as a guide, we proposed that relative reductions of 10%, 20%, 30% and 40% in police-recorded violence against the person would be reasonable effect sizes.

Simulation

Based on this information, we created 10,000 simulated data sets with a similar distribution of the outcome for each of 32 combinations of effect size and sample size (see Table 2). Effect size was simulated by artificially manipulating the outcome for the treatment group, and the data were modelled using negative binomial regression. For each combination of effect size and sample size, the proportion of the 10,000 treatment effect estimates that were statistically significant was stored and is visualised in Fig 1. Reproducible code for these analyses is included as S1 File.
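The trial's reproducible simulation code is in R (S1 File). As a simplified, illustrative sketch only, the same power-by-simulation logic can be expressed in Python: draw negative binomial counts (via a gamma-Poisson mixture), apply a relative reduction to the treatment arm, and record how often a test detects the difference. The dispersion, baseline mean, and the large-sample z-test on mean counts used here are assumptions for illustration, not the trial's actual negative binomial regression model.

```python
import math
import random

def neg_binom(mean, dispersion, rng):
    # Negative binomial count via a gamma-Poisson mixture
    lam = rng.gammavariate(dispersion, mean / dispersion)
    # Poisson draw by inversion (adequate for small lambda)
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def power(n_per_arm, relative_rate, base_mean=1.0, dispersion=0.8,
          sims=300, seed=1):
    """Share of simulated two-arm trials where a large-sample z-test
    on the difference in mean counts is significant at alpha = 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        ctrl = [neg_binom(base_mean, dispersion, rng)
                for _ in range(n_per_arm)]
        treat = [neg_binom(base_mean * relative_rate, dispersion, rng)
                 for _ in range(n_per_arm)]
        diff = sum(treat) / n_per_arm - sum(ctrl) / n_per_arm
        se = math.sqrt((sample_var(ctrl) + sample_var(treat)) / n_per_arm)
        if se > 0 and abs(diff / se) > 1.96:
            hits += 1
    return hits / sims
```

Sweeping `power` over a grid of sample sizes and relative rates reproduces the structure of Table 2, although the exact values depend on the outcome model used.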

Table 2. Simulated data sets (cells show statistical power; column headings are the relative rate applied to the treatment-group outcome).

n | 0.9 | 0.8 | 0.7 | 0.6
100 | 0.0154 | 0.042 | 0.2275 | 0.3248
200 | 0.0231 | 0.091 | 0.4958 | 0.6393
300 | 0.029 | 0.1349 | 0.6803 | 0.8371
400 | 0.0354 | 0.1859 | 0.8255 | 0.9391
600 | 0.0348 | 0.2745 | 0.954 | 0.993
800 | 0.0397 | 0.3447 | 0.99 | 0.999
1700 | 0.04 | 0.6461 | 1 | 1
2500 | 0.0417 | 0.8334 | 1 | 1
Fig 1. Statistical power across three effect sizes.


The graph illustrates the relative importance of effect size (‘relative change’) and sample size to statistical power. Using these simulations, we concluded that five sufficiently powered trials were unlikely to be achievable, even under the most optimistic treatment effects. Consequently, as the site interventions are largely homogeneous (designed according to the same framework and with comparable populations), an option to pool the data from all five sites as a multicentre trial was considered. This pooling would achieve a sample size of approximately 1,700. Based on this sample size, a relative reduction of 26% would be detectable in 80% of trials. Pooling was therefore determined to be the best trade-off between value for money and the feasibility of detecting a plausible effect.

Summative impact evaluation

The summative impact evaluation has three research questions that will be examined during the 12-month follow-up period per participant. For evaluation purposes, the start of the intervention is operationalised as the point of randomisation. The end is set as either graduation from the programme (i.e., the point at which the delivery team and the programme participant jointly agree that the individual is no longer benefitting from the programme) or as dropping out of the programme due to, for example, imprisonment or death. Since it is unlikely that each individual will become part of the programme immediately after being randomised into the treatment group, the time from randomisation to first contact by the delivery team is recorded in the process data and will be accounted for in the analysis.

The summative impact evaluation questions are as follows:

  1. What is the difference in the number of violence against the person offences attributed to individuals receiving the focused deterrence intervention compared to those of similar individuals receiving business-as-usual support?

  2. What is the difference in the time to a violence against the person offence (in days) attributed to individuals receiving the intervention compared to those of similar individuals receiving business-as-usual support?

  3. What is the difference in the number of co-offending crimes (i.e., crimes involving two or more perpetrators) attributed to individuals receiving the intervention compared to those of similar individuals receiving business-as-usual support?

Randomisation

Eligible individuals within each site are randomised to treatment or control conditions on a 1:1 allocation ratio. Randomisation is stratified on (i) a three-point ordinal scale of frequency of offending in the past 24 months (derived from raw counts for each location but coarsened for randomisation to low, medium and high) and (ii) whether eligible participants are 18 years old and over or under 18 years of age. Stratification helps to reduce between-group differences at baseline, helping to increase statistical power with a fixed sample size [21].

Prior offending categories (tertiles) have been established from the first cohort of eligible individuals and are site-specific (i.e., grouping into low, medium and high might differ between sites). This method was chosen because it is transferrable across jurisdictions, easily accessible to analysts, and the data-generating process is relatively consistent across areas.
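As a sketch of the tertile coarsening described above, the snippet below derives site-specific cut points from a first-cohort distribution of prior offending counts and maps a raw count to low/medium/high. The function name and example counts are hypothetical; the cut points the trial actually uses come from each site's own cohort.

```python
from statistics import quantiles

def offending_tertile(count, cohort_counts):
    """Coarsens a raw prior-offending count into 'low'/'medium'/'high'
    using site-specific tertile cut points from the first cohort."""
    lo, hi = quantiles(cohort_counts, n=3)  # two tertile cut points
    if count <= lo:
        return 'low'
    return 'medium' if count <= hi else 'high'
```

Because the cut points are derived per site, the same raw count can fall into different categories in different sites, which is exactly the site-specific grouping described above.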

Our second stratification is whether a case is an adult (18 and over) or a child (under 18 years) since all sites expect a mix of such cases (noting that the minimum age is 14 years). This is necessary because statutory services in England generally treat children and adults differently, with deterrence and support provisions varying by child/adult status.

The evaluation team undertook the initial randomisation of long-listed eligible candidates in ‘batches’. Further randomisation will be undertaken on a case-by-case basis, as per a ‘trickle trial’ [22], using a dedicated randomisation platform that will record the individual’s unique ID, the site, their tertile of offending frequency, whether they are a child (under 18) at the time of randomisation, and treatment allocation. Records of treatment allocation are accessible to the trial’s delivery team and the evaluation team and will be retained for the trial duration plus 10 years. Random allocation of cases will be completed using the ‘randomizr’ package [23] for R v4.0.4 [24]. The sample code is available on the Open Science Framework: https://osf.io/pvnj2/.
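The trial's allocation uses the 'randomizr' package in R, as noted above. Purely as an illustrative sketch of the stratified 1:1 assignment it performs, an equivalent in Python might look like the following; the function name, dictionary keys, and stratum definition are hypothetical, though the strata mirror the two stratification variables described above.

```python
import random

def stratified_allocate(participants, seed=2024):
    """Assigns treatment/control on a 1:1 ratio within each stratum.

    `participants` is a list of dicts with hypothetical keys 'id',
    'offending_tertile' ('low'/'medium'/'high') and 'adult' (bool).
    Within each (tertile, adult) stratum, half the cases go to
    treatment; for odd-sized strata the extra case is assigned at random.
    """
    rng = random.Random(seed)
    strata = {}
    for p in participants:
        strata.setdefault((p['offending_tertile'], p['adult']), []).append(p)
    allocation = {}
    for members in strata.values():
        rng.shuffle(members)
        # For odd strata, randomly decide which arm gets the extra case
        half = len(members) // 2 + rng.choice([0, len(members) % 2])
        for i, p in enumerate(members):
            allocation[p['id']] = 'treatment' if i < half else 'control'
    return allocation
```

In a trickle trial, new batches of eligible cases would be passed through the same stratified procedure as they arrive.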

Outcome measures

The primary outcome for the impact evaluation is the number of violence against the person offences attributed to an individual within one year of randomisation. The secondary outcomes are:

  1. the number of days between randomisation and a recorded violence against a person offence with a Police National Computer (PNC) disposal outcome relevant to the evaluation;

  2. when there is a co-offender, the number of recorded violence against the person offences attributed to an individual with a PNC disposal outcome relevant to the evaluation; and

  3. when there is a co-offender, the number of any recorded offences attributed to an individual with a PNC disposal outcome relevant to the evaluation.

For the purposes of this evaluation, PNC disposal outcomes relevant to the evaluation refer to all offences specified by the Home Office’s offence codes used in the court proceedings database as Violence Against the Person offences. Police-recorded violence against the person is the best outcome measure for evaluating the intervention’s effectiveness, as it is a standard measure of violent behaviour [25]. Although about half of all violent behaviour is not reported to the police, underreporting decreases as the severity of violence increases: approximately 79% of violence against adults treated by a medical professional, representing the most serious violent offending, is captured. However, reporting rates of violence against 10–15-year-olds are considerably lower, with less than 10% of all violence and 18% of violence resulting in medical treatment being reported [26].

These outcome offences will be tracked through an individual’s PNC record. PNC metadata will be used to identify cases and outcomes related to this measure. The primary outcome is determined from PNC records using the date of any eligible offence within the trial period. The evaluation team will create a dataset using the date of randomisation as the starting point and count the number of relevant incidents within one year of allocation. This approach was chosen to measure the primary outcome and address some of the known inconsistencies in the crime recording system and Home Office outcomes codes. Primary outcome data will be collected throughout the trial and analysed during the evaluation phase. Access to data and the secure transfer and processing of the data, as well as safe reporting of results, will be governed by data processing agreements between the research team and local police forces.

Analysis

Statistical significance tests will not be carried out to assess baseline balance, as their premise does not hold in randomised controlled trials (i.e., given that appropriate randomisation procedures were followed, any differences between control and treatment groups at baseline will be due to chance. See http://www.consort-statement.org/checklists/view/32-consort/510-baseline-data) [27]. Instead, tables of the pooled means (and standard deviations, where appropriate) for each characteristic and the magnitude of any differences explored will be presented. For skewed variables, quartile-based measures will be presented. In the statistical analysis plan, we will specify the details for assessing imbalance, which will set out criteria against variables used in randomisation and any putative time control variables used in our analysis to increase power (e.g., previous offending). We will also present balance visually—so, for previous offending, we will look at the distribution of offence counts by treatment and control.
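As a minimal sketch of the descriptive balance reporting described above (pooled means, standard deviations, and raw differences rather than significance tests), assuming hypothetical field names such as 'arm' and 'prior_offences':

```python
from statistics import mean, stdev

def balance_table(rows, covariates):
    """Per-arm mean (SD) and raw treatment-control difference
    for each covariate.

    `rows` is a list of dicts with an 'arm' key ('treatment'/'control')
    plus numeric covariates; the field names here are illustrative.
    """
    table = {}
    for cov in covariates:
        cells = {}
        for arm in ('treatment', 'control'):
            xs = [r[cov] for r in rows if r['arm'] == arm]
            cells[arm] = (mean(xs), stdev(xs) if len(xs) > 1 else 0.0)
        # Magnitude of the difference, not a significance test
        cells['difference'] = cells['treatment'][0] - cells['control'][0]
        table[cov] = cells
    return table
```

For skewed variables such as offence counts, the same structure would report quartile-based summaries instead of means and SDs, as noted above.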

Intention to treat analysis

Based on the project’s progress to date and our understanding of the proposed implementation, our analytical approach will pool individual data from all sites into a single analytical model. Our primary analysis will be based on intention-to-treat (ITT): individuals will be analysed according to the group to which they are randomised, regardless of whether they engaged with the intervention or remained in the control condition. The ITT approach is particularly relevant for future policy-making stakeholders and practitioners, who may roll out an intervention with little control over how it is taken up in the system. The ITT approach therefore estimates the effect of offering the intervention, incorporating the imperfect uptake of that offer.

The statistical model for the ITT analysis is set out below in Eq 1. The model incorporates variables for between-site and over-time variation and variables used for stratification.

Y = α + β1(treatment) + β2(offending frequency) + β3(adult) + β4(site) + β5(year/month) + ε (1)

In the equation, Y is the outcome, in this case a count variable measuring the number of violent offences attributed to an individual in the twelve calendar months following randomisation. The analysis will be based on count outcomes. We intend to use a zero-inflated Poisson model because many individuals are likely to have no further offences in the 12-month follow-up period post-randomisation. β1(treatment) is a binary variable where 0 = control and 1 = treatment; the coefficient on this variable will be the focal result for the project. β2(offending frequency) is one of the variables used for stratification in each site. We acknowledge that there is a difference between the model used in the power calculation and the analysis model presented here (S1 File); we will update the power calculations for our statistical analysis plan once we have sample data from sites to understand the likely outcome distribution (based on pre-intervention outcome data). Offending frequency will be included in the analysis as n-1 dummy categories, with the reference category being the category with the largest number of observations from low, medium, or high offending frequency (not pre-specifying the category now will not affect the results). β3(adult) is a binary variable for whether an individual is an adult (18 and over) (= 1) or a child (under 18 years of age) (= 0). β4(site) denotes site fixed effects: dummy variables for n-1 sites, again with the site with the largest number of observations overall as the reference category (anticipated to be either Coventry or Wolverhampton in the West Midlands). We include this variable because we know, a priori, that sites will differ in their eligibility criteria and selection processes, so we need to account for between-site variation in our analysis.
Finally, β5(year/month) captures the year and month since the start of the delivery period, accounting for seasonal variation and any idiosyncratic shocks during delivery. It will be entered as a continuous variable, but if there are model convergence problems, we will aggregate it to year-quarter. We also acknowledge that it may be necessary to include a further variable for offender sex, depending on the number of females included. If so, this would be a binary variable with 0 = male and 1 = female, determined by sex at birth where it is possible to determine this from available data.
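The n-1 dummy coding with a largest-category reference described above can be sketched as follows. This is illustrative only (the trial's analysis will be implemented in its own statistical software); the function name is hypothetical.

```python
from collections import Counter

def dummy_code(values):
    """n-1 dummy coding with the most frequent category as reference.

    Returns (dummy_levels, reference_level, design_rows), e.g. for
    offending frequency the 'low'/'medium'/'high' labels with the
    largest category dropped as the reference.
    """
    counts = Counter(values)
    ref = counts.most_common(1)[0][0]          # largest category
    levels = sorted(l for l in counts if l != ref)
    rows = [[1 if v == l else 0 for l in levels] for v in values]
    return levels, ref, rows
```

The same coding applies to the site fixed effects, with the largest site as the reference category.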

For our analysis, we will use robust standard errors and calculate 95% confidence intervals based on those (the exact specification will be clarified in the statistical analysis plan). Standard error adjustment is sensible given that this helps in the event of model misspecification and heterogeneous treatment effects [28, 29]. The model specification will be the same for primary and secondary outcomes.

Multiple outcome testing

The study has a single primary outcome, as presented above. That will be the main result reported for the study and will not be adjusted for tests on secondary outcomes. There are three secondary outcomes: one on time to a violent offence and two on co-offending. The study will adjust the co-offending analyses using the false-discovery rate [30]. All other analyses will be unadjusted. We know, a priori, that the study is not powered for sub-group analyses; any such analyses (along with anything not pre-specified) will be clearly labelled as exploratory.
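The false-discovery-rate adjustment cited above is commonly implemented with the Benjamini-Hochberg step-up procedure. A minimal illustrative sketch (not the trial's analysis code) is:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean 'reject' list, in the original order of `pvals`,
    controlling the false-discovery rate at level q.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= q * k / m
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    # Reject all hypotheses up to and including rank k_max
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

With only two co-offending outcomes to adjust, the procedure reduces to comparing the smaller p-value against q/2 and the larger against q.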

Blinding during analysis

While the team are independently appointed evaluators, we will conduct analyses using masked treatment groups, so the analyst conducting the analysis will be blind to allocation.

Formative process evaluation

The formative process evaluation will use a realist approach to explain how the intervention worked, in what context, and with which population groups. It will address nine formative research questions:

  1. To what extent were the three components of the intervention, as required by the YEF framework, received by the treatment population groups?

  2. How did inputs contribute to the intervention functioning?

  3. Who did the intervention work for, and how?

  4. How did local context affect intervention delivery?

  5. To what extent was the intervention delivered as intended?

  6. How did complexity affect intervention delivery?

  7. How did proximal outcomes change?

  8. Why did proximal outcomes change?

  9. What was learned from how the intervention was delivered?

The formative evaluation uses a mixed methods design, including qualitative, semi-structured cross-sectional and longitudinal interviews, and written researcher notes following observations of regular meetings between the programme delivery team and programme participants. In addition, it uses quantitative, routine outcome performance monitoring derived from administrative data. This comprises a standardised set of criteria that the sites will collect as part of the intervention, including, for example, the length of time needed for the participant to consent to being in the programme; the number of participants per delivery team member (community navigator/mentor); the number of meetings per participant; the length of time in the programme; the number of referrals to support services; and the number of referrals for enforcement. Sampling for interviews and observations varies, with researchers aiming for a higher number of individuals in the first round of cross-sectional interviews and retaining as many as possible for repeat, longitudinal interviews and observations. Both interview and observation participants will be recruited with support from the delivery team, to reduce the possibility that participating in the study would discourage them from continuing the programme (e.g., out of fear that the evaluators work with the police).

In this setting, high-quality logic models and context-mechanism-outcome (C-M-O) configurations are important for ensuring the evaluability of the intervention. As shown in Table 3, these initial high-level C-M-O configurations will be used to test and develop a range of realist causal explanations that can be attributed to local ‘observable’ contexts. As recommended by an external expert (Professor Chris Bonnell, London School of Hygiene and Tropical Medicine, UK), the evaluation uses a limited set of C-M-O configurations centred on generally accepted ‘big’ ideas relevant to focused deterrence. These ideas have previously been used in the literature to plausibly explain why targeted deterrence, the provision of support, and community voice and legitimacy may affect specific individuals in varied contexts. During the early implementation phase, a final list of C-M-O configurations will be compiled for evaluation during the full implementation phase. This final list, aligned to the YEF implementation framework, will likely depend on the availability of relevant data and resources and the viability of testing configurations.

Table 3. Initial high-level C-M-O configurations (v2.0).

Each configuration below reads Context + Mechanism = Proximal outcome(s).

Configuration 1 [Targeted enforcement]
Context: Heterogeneity in local resource availability and deterrence activities
Mechanism: An increase in legitimate targeted deterrence activities affects perceptions of certainty of arrest and punishment
Proximal outcome(s): Normative and instrumental non-violent modifications and engagement with individualised support

Configuration 2 [Individualised support]
Context: Variance in resource allocation, coordination, engagement, collaborations and spectrum of available support services
Mechanism: Increase in availability of improved individualised support packages provides pathways to desistance and increasing perceptions of benefits
Proximal outcome(s): Sustained engagement with pathways to desistance and reorientation towards legal and social norms

Configuration 3 [Community validation]
Context: Decrease in levels of community confidence in local policing and statutory and non-statutory support services
Mechanism: Social amplification of community moral voice, characterised by peer and familial influences, informal social controls, collective efficacy, and communicating shared values and beliefs, affects normative compliance and behaviour modification
Proximal outcome(s): Increase in legitimacy and support for intervention and enforcement activities to achieve normative compliance

The formative evaluation focuses mainly on targeted deterrence, individualised support, community validation, and the interactions between these mechanisms; the summative outcomes focus on violent offending and involvement in group violence.

Data collection

Data collection began in May 2023 and will run for at least two years. Various methods will be used to gather cross-sectional, longitudinal, and pre-post data from a variety of corroborative sources. Data from the formative component will be analysed thematically to build a shared understanding of how, why, and for whom the intervention worked in varied local contexts. The ability to delve deeply into the longitudinal qualitative data is a key advantage of the protocol design. Data collection methods include:

Semi-structured interviews

The experiences of programme participants, the delivery team, and stakeholders will be the focus of interviews. Due to the expected high dropout rates among programme participants, we will use convenience sampling and not limit the number of participants. We will offer vouchers equal to the UK living wage (~£15/h) as incentives to retain participants until the end of the intervention.

Observations

A standard observation protocol, informed by the nine YEF FD framework criteria and each location’s unique intervention design, will be used to monitor interactions between participants and the delivery team. Researchers will document regular activities using structured and unstructured methods, noting features such as discussion topics, group dynamics, decision-making, available resources, and cultural contexts.

Administrative data

The delivery teams are responsible for gathering a wide range of intervention delivery and outcomes data. These data will help characterise the cohort, evaluate the balance between the intervention and control groups, outline participants’ journeys through the intervention, measure the degree of engagement, and detail when dropouts and completions occur. This information will be transferred to the evaluation team at regular intervals.

Discussion

This protocol outlines the process and impact evaluation methodology for the most extensive multisite evaluation of focused deterrence conducted in the UK. Spanning five distinct sites and seven trials, the evaluation includes a cohort of approximately 2,000 individuals. The protocol uses an integrated realist evaluation framework with a multisite parallel RCT design to evaluate the efficacy of focused deterrence as an intervention for reducing violent offending among individuals aged 14 and older who are at risk of violent offending in England. To the authors’ knowledge, this will be the first multisite impact and process evaluation of a UK-based violence reduction intervention. The anticipated findings will offer pivotal insights for future violence prevention policy in the UK and are expected to contribute significantly to the literature on violence prevention and intervention evaluation.

Interim analysis and ‘backfire’ check

To mitigate the risk of iatrogenic effects, interim outcome analyses will be conducted roughly six months after the trial start date, defined as the date of the first randomisation in any site. Given the seriousness of offending in the cohort, these analyses aim to identify any adverse effects and/or clear evidence of harm, focusing on the direction and magnitude of effects. The interim check will not involve formal significance testing, to avoid ‘alpha spending’ [31]. In the context of this intervention, harm is defined as a negative impact on offending (averaged across sites) in which reoffending prevalence in the treatment group is at least 10 percentage points higher than in the control group (e.g., 20% in treatment, 10% in control).

Likewise, to act on the potential risk of harm, we established a set of pre-specified stopping rules that will inform the decision on study continuation after the initial six-month period. These rules allow for site-specific differences and avoid terminating the intervention in all sites if one site shows harmful effects at interim analysis. The rules are:

  1. if the average impact is negative, the study will pause intake for one month to allow for a discussion of options and agreement on study progression

  2. if the average impact is positive, the threshold for roll-out to all participants will increase. To err on the side of caution, in this case, the reoffending prevalence would need to be 15 percentage points lower in the treatment group than control rather than 10 (e.g., 30% in control, 15% in treatment).
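To make the thresholds concrete, the harm check and stopping rules above can be sketched as a small decision function. This is illustrative only: the function name and return strings are hypothetical, the inputs are site-averaged reoffending prevalences, and the protocol’s actual decisions involve discussion between evaluators and funder rather than an automated rule.

```python
def interim_decision(treat_prev, control_prev):
    """Illustrative sketch of the interim 'backfire' check and
    stopping rules (hypothetical helper, not the protocol's code).
    Inputs are reoffending prevalences averaged across sites."""
    diff = treat_prev - control_prev  # positive = treatment faring worse
    if diff >= 0.10:
        # harm threshold: >= 10 percentage points worse (e.g. 20% vs 10%)
        return "harm: halt and review"
    if diff > 0:
        # rule 1: average impact negative -> pause intake for one month
        return "pause intake for one month"
    if diff <= -0.15:
        # rule 2: >= 15 percentage points better (e.g. 15% vs 30%)
        return "meets roll-out threshold"
    return "continue as planned"

print(interim_decision(0.20, 0.10))
```

The ordering of the checks matters: the harm condition is tested first so that a large adverse difference is never mistaken for the milder “pause intake” case.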

Ethics and confidentiality

This study has been reviewed and approved by the Faculty of Arts, Cultures and Education Ethics Committee of the University of Hull (ref. 2223STAFF14) with no modifications required. The ethics application included a participant information sheet for each population group (i.e., programme participants, delivery team, stakeholders) and each research method (i.e., interview, observation, survey), a consent form for each population and method, as well as an assent form for parents or carers of those under the age of 18. It further included draft interview schedules, observation themes, survey questions, and risk assessments.

To ensure the confidentiality of all participants, data used in all publications will be anonymised and reported in aggregate, where possible. If direct quotations are used, the researchers will take necessary precautions to ensure that participants cannot be identified. There will be no direct links between the study outcomes (i.e., police-recorded offences) and qualitative data gathered for evaluation purposes.

Data management

Data management and storage will comply with the UK Data Protection Act 2018 and with YEF and University of Hull policies and procedures. To facilitate secure information transfer, we have access to encrypted CJSM accounts, and our team members have NPPV Level 2 security vetting. Data transfer, storage, and processing for the summative evaluation will be handled through the University of Hull’s Data Safe Haven. This system, and the protocols for accessing and processing data, exceed the DSP Toolkit standards required by NHS Digital for processing personal health data; the facility is ISO 27001 certified, making it one of the most secure data facilities available.

Dissemination and knowledge transfer

Our knowledge transfer strategy extends beyond the final evaluation report. Our plan ensures that our key findings, impact, and process are widely shared and accessible. This involves multiple structured outputs, collectively designed, which will distil and present our insights to diverse audiences. These will be made available in open access academic journals and showcased at global academic and stakeholder conferences, allowing us to disseminate our findings and engage in valuable dialogue with peers and stakeholders. Such discussions often lead to new insights, future collaborations, and opportunities for iterative refinement of our work.

However, given the high vulnerability of the population in this sample, particularly in the summative evaluation, and the Data Sharing Agreements signed with each participating police force, the authors are unable to make the data fully available without restriction. Likewise, the qualitative data gathered are confidential and might include identifiable information. The consent form specifies that all data will be presented in a pseudonymised, unidentifiable form. We are therefore unable to make the raw data public.

To promote ongoing learning and dissemination of information throughout the project’s implementation, we have already incorporated various mechanisms to communicate regularly with YEF, project delivery sites, and stakeholders. Current outputs include programme-level strategies for formative and summative evaluation, programme-level theories of change, local monitoring and evaluation plans, and system health assessments. By sharing these outputs, we aim to contribute to a broader body of knowledge, enabling other researchers and practitioners to learn from our experiences and potentially apply our insights in their own contexts.

Supporting information

S1 File. Power calculations.

This is the code to reproduce the power calculations.

(PDF)

pone.0301023.s001.pdf (126.4KB, pdf)
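The authors’ own power-calculation code is supplied in the supplementary file. As an illustration only, a minimal normal-approximation power calculation for a two-arm comparison of reoffending proportions could be sketched as follows. The 20% baseline (control) prevalence is an assumption chosen for illustration, not a figure taken from the protocol; the function itself is a generic two-proportion z-test approximation, not the authors’ method.

```python
import math

def power_two_proportions(p_control, rel_reduction, n_per_arm):
    """Approximate power of a two-sided two-proportion z-test at the
    5% level (normal approximation, equal allocation per arm)."""
    p_treat = p_control * (1 - rel_reduction)
    p_bar = (p_control + p_treat) / 2
    # standard error under the null (pooled) and under the alternative
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p_control * (1 - p_control) / n_per_arm
                       + p_treat * (1 - p_treat) / n_per_arm)
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    z = (abs(p_control - p_treat) - z_crit * se_null) / se_alt
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Assumption for illustration: 20% baseline reoffending prevalence,
# 26% relative reduction, N = 1,700 split evenly (850 per arm).
print(f"{power_two_proportions(0.20, 0.26, 850):.2f}")
```

Under these illustrative inputs the approximation returns a power in the region of the 80% figure quoted in the abstract; the protocol’s actual calculation may use different assumptions and methods.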

Acknowledgments

We acknowledge colleagues from the Youth Endowment Fund. We also acknowledge colleagues from each of the five sites for their ongoing support and contributions to the design and development of research questions for the process and impact evaluation that are part of this protocol.

Abbreviations

CMO

Context Mechanism Outcome

FD

Focused Deterrence

ITT

Intention To Treat

PNC

Police National Computer

RCT

Randomised Controlled Trial

TIDieR

Template for Intervention Description and Replication

YEF

Youth Endowment Fund

Data Availability

No datasets were generated or analysed during the current study. All relevant data from this study will be made available upon study completion.

Funding Statement

This work was supported by The Youth Endowment Fund under grant number 3928041.

References

  • 1. WHO (2023). Youth Violence: Key Facts. World Health Organization. https://www.who.int/news-room/fact-sheets/detail/youth-violence
  • 2. David-Ferdon C., Clayton H.B., Dahlberg L.L., et al. (2021). Vital Signs: Prevalence of Multiple Forms of Violence and Increased Health Risk Behaviors and Conditions Among Youths—United States, 2019. MMWR Morbidity and Mortality Weekly Report, 70:167–173. doi: 10.15585/mmwr.mm7005a4
  • 3. Cohen M.A. (2020). The Costs of Crime and Justice. London: Routledge.
  • 4. Wilson D.B., Brennan I., and Olaghere A. (2018). Police-initiated Diversion for Youth to Prevent Future Delinquent Behavior: A Systematic Review. Campbell Systematic Reviews, 14(1):1–88. doi: 10.4073/csr.2018.5
  • 5. Petrich D.M., Pratt T.C., Jonson C.L., and Cullen F.T. (2021). Custodial Sanctions and Reoffending: A Meta-Analytic Review. Crime and Justice, 50:353–424. doi: 10.1086/715100
  • 6. Innes M. (2014). Signal Crimes: Social Reactions to Crime, Disorder, and Control. Oxford: Oxford University Press.
  • 7. Office for National Statistics (2022). The Nature of Violent Crime in England and Wales: Year Ending March 2022. ons.gov.uk
  • 8. HM Government (2018). Serious Violence Strategy. London: Home Office.
  • 9. Home Office (2023). Violence Reduction Units Evaluation 2022 to 2023. gov.uk
  • 10. Abt T. (2018). Bleeding Out. New York: Basic Books.
  • 11. College of Policing (2023). Crime Reduction Toolkit. College of Policing.
  • 12. Braga A.A., Weisburd D., and Turchan B. (2018). Focused Deterrence Strategies and Crime Control: An Updated Systematic Review and Meta-Analysis of the Empirical Evidence. Criminology and Public Policy, 17(1):205–250. doi: 10.1111/1745-9133.12353
  • 13. Braga A.A., Weisburd D., and Turchan B. (2019). Focused Deterrence Strategies Effects on Crime: A Systematic Review. Campbell Systematic Reviews, 15(3):e1051. doi: 10.1002/cl2.1051
  • 14. Davies T., Grossmith L., and Dawson P. (2016). Group Violence Intervention London: An Evaluation of the Shield Pilot. MOPAC Evidence and Insight.
  • 15. Williams D.J., Currie D., Linden W., and Donnelly P.D. (2014). Addressing gang-related violence in Glasgow: A preliminary pragmatic quasi-experimental evaluation of the Community Initiative to Reduce Violence (CIRV). Aggression and Violent Behavior, 19(6):686–691. doi: 10.1016/j.avb.2014.09.011
  • 16. Kerr J., Wishart R., Rantanen K., DeMarco J., and Turley C. (2021). Vulnerability and Violent Crime Programme Evaluation of the Community Initiative to Reduce Violence (CIRV): Full Technical Report. College of Policing Limited.
  • 17. United Nations Office on Drugs and Crime (UNODC) (2023). Victims of Intentional Homicide. dataUNODC.
  • 18. Brennan I., Simanovic T., Sutherland A., McFarlane P., and Graham W. (2023). Another Chance Fund Focused Deterrence Programme: A Multicentred Randomised Controlled Trial. Evaluation Protocol. doi: 10.1186/ISRCTN11650008
  • 19. Youth Endowment Fund (2023). YEF Toolkit: An Overview of Existing Research on Approaches to Preventing Serious Youth Violence. Youth Endowment Fund.
  • 20. Farrington D.P., Gaffney H., and White H. (2022). Effectiveness of 12 Types of Interventions in Reducing Juvenile Offending and Antisocial Behaviour. Canadian Journal of Criminology and Criminal Justice, 64(4):47–68. doi: 10.3138/cjccj.2022-0022
  • 21. Kahan B.C. and Morris T.P. (2012). Reporting and analysis of trials using stratified randomisation in leading medical journals: Review and reanalysis. BMJ, 345:e5840. doi: 10.1136/bmj.e5840
  • 22. Ariel B., Bland M., and Sutherland A. (2022). Experimental Designs. London: SAGE.
  • 23. Coppock A. (2023). randomizr: Easy-to-Use Tools for Common Forms of Random Assignment and Sampling. R package version 1.0.0.
  • 24. R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  • 25. Home Office (2023). Home Office Crime Recording Rules for Frontline Officers and Staff. London: Home Office.
  • 26. Brennan I.R. (2019). Victims of Serious Violence in England and Wales, 2011–2017. College of Policing.
  • 27. Moher D., Hopewell S., Schulz K.F., Montori V., Gøtzsche P.C., Devereaux P.J., et al. (2010). CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology, 63:e1–37. doi: 10.1016/j.jclinepi.2010.03.004
  • 28. Cunningham S. (2021). Causal Inference: The Mixtape. Yale University Press. doi: 10.2307/j.ctv1c29t27
  • 29. White H. (1980). A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity. Econometrica, 48(4):817–838. doi: 10.2307/1912934
  • 30. Benjamini Y. and Hochberg Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society: Series B (Methodological), 57:289–300. doi: 10.1111/j.2517-6161.1995.tb02031.x
  • 31. Meurer W.J. and Tolles J. (2021). Interim Analyses During Group Sequential Clinical Trials. JAMA, 326(15):1524–1525. doi: 10.1001/jama.2021.10174

Decision Letter 0

Nickolas Zaller

21 Dec 2023

PONE-D-23-36345

Focussed deterrence: A protocol for a realist multisite randomised controlled trial for evaluating a violence prevention intervention in the UK

PLOS ONE

Dear Dr. McFarlane,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 04 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Nickolas Zaller

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating in your Funding Statement: 

This work was supported by The Youth Endowment Fund under grant number 3928041.

Please provide an amended statement that declares all the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.  Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement. 

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Descriptions of methods and materials in the protocol should be reported in sufficient detail for another researcher to reproduce all experiments and analyses. The protocol should describe the appropriate controls, sample size calculations, and replication needed to ensure that the data are robust and reproducible.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you to the authors for their dedication to the field and development of this very important work. This is a foundation of the implementation of a program, however does lack much of the data needed for rationale in power and statistical analysis.

In the introduction, I would prefer to see more epidemiology focus on actual UK data, as opposed to citing US data. For a program that is based in the UK, it is a challenge to cite this as a public health problem from the US surgeon general. Data from the UK, especially in types of violence would be imperative in laying the foundation in the introduction as to the need for the program in the UK. Policy differences are important to discuss and also raise enough disparities between communities that may impact your background in the justification of the use of the type of the program. This was not addressed here.

For the methods, you discuss the rarity of the violent events, but you do not give any number of events in the region. This would be helpful for understanding the expansiveness of the problem. For this study, even if you do not have any data, you could report out how many violent events occurred in the previous 5 years in each location. This would assist in learning if the sites selected would be able to meet the expected enrollment. This study states it is key on having adequate sample size. There are no sample size or power calculations listed, however this is stated several times. Please include these. You only include the sample size in the abstract, but not anywhere in your manuscript, along with the calculations. Please include these and explain rationale for the calculation. Overall, the explanations tend to not be clear. You are not explaining what you will be analyzing where, and your description of mixed methods evaluation is not clear as well. Maybe a table to supplement your planned analyses, or restructuring the section to have it grouped to better explain you plans.

Discussion: in your discussion was the first time you address sample size within the actual manuscript (outside of the abstract)- however having that data earlier would be important, as well as clarifying goal sample sizes for each site. It would be helpful to have a figure about your timeline which will add more clarity to your proposed study activities. What are your anticipated outcomes? You discuss that you plan to share on global stages however what are your anticipated outcomes and what barriers do you expect to encounter?

Overall the general organization of the manuscript can be improved to allow better understanding of the study and planned statistical approach.

There are no references included -- will need these with choices for sample size calculation, and in your introduction as well.

Reviewer #2: Summative Impact evaluation: What is the intervention date for these research questions? At randomization? Authors should clarify.

Outcome measures: Q2 and Q3 could use some clarification about what "PNC disposal outcome relevant to the evaluation" means. Could the authors provide an example?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Mar 28;19(3):e0301023. doi: 10.1371/journal.pone.0301023.r002

Author response to Decision Letter 0


15 Feb 2024

Please see attached response to reviewer comments in revised submission

Attachment

Submitted filename: Response to Reviewers.pdf

pone.0301023.s002.pdf (106.8KB, pdf)

Decision Letter 1

Nickolas Zaller

11 Mar 2024

Focussed deterrence: A protocol for a realist multisite randomised controlled trial for evaluating a violence prevention intervention in the UK

PONE-D-23-36345R1

Dear Dr. McFarlane,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at http://www.editorialmanager.com/pone/ and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Nickolas Zaller

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Descriptions of methods and materials in the protocol should be reported in sufficient detail for another researcher to reproduce all experiments and analyses. The protocol should describe the appropriate controls, sample size calculations, and replication needed to ensure that the data are robust and reproducible.

Reviewer #1: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have done a nice job with the revisions.

The introduction relates it well to the prospective site for the intervention, and the study design is more clear.

Thank you for the thoughtful revisions

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Acceptance letter

Nickolas Zaller

18 Mar 2024

PONE-D-23-36345R1

PLOS ONE

Dear Dr. McFarlane,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Nickolas Zaller

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Power calculations.

    This is the code to reproduce the power calculations.

    (PDF)

    pone.0301023.s001.pdf (126.4KB, pdf)
    Attachment

    Submitted filename: Response to Reviewers.pdf

    pone.0301023.s002.pdf (106.8KB, pdf)

    Data Availability Statement

    No datasets were generated or analysed during the current study. All relevant data from this study will be made available upon study completion.


    Articles from PLOS ONE are provided here courtesy of PLOS
