Abstract
This study compares prevention program registries in current use on their level of support for users seeking to implement evidence-based programs. Despite the importance of registries as intermediaries between researchers and the public, and although previous studies have examined how registries define their standards for methodological soundness and evidence of efficacy, little research has focused on the degree to which registries consider programs’ dissemination readiness. The result is that registry users are uncertain whether listed programs and their necessary support materials are even available for implementation. This study evaluates 11 publicly and privately funded prevention registries that review the evidence base of programs seeking to improve child health and prosocial outcomes on the degree to which they use dissemination readiness as an evidentiary criterion for rating programs, and the extent and type of information they provide about dissemination readiness to support real-world implementation. The results show wide variability, with few having standards about dissemination readiness or making evidence-based information about interventions easily accessible to users. Findings indicate the need for registries to (1) do more to assess dissemination readiness before including programs on their website and (2) offer more complete information on dissemination readiness and implementation support to users.
Keywords: systematic review, program implementation, design and evaluation of programs and policies, scale-up, evidence-based programs, “What works” registries, dissemination and implementation
A number of programs and practices have been tested using rigorous scientific methods and shown to prevent behavioral health problems (Catalano et al., 2012; National Research Council and Institute of Medicine [NCIM], 2009; Sandler et al., 2014). Many practitioners, clinicians, educators, funders, and administrators search for these evidence-based interventions (EBIs) on web-based registries, or “information sources that aggregate, standardize, review and rate the evidence base of interventions, acting as repositories that provide input into the decision-making process” (Neuhoff et al., 2015, p. 11). The large number of registries—up to 20 within the United States alone (Burkhardt et al., 2015; Means et al., 2015)—indicates their importance as intermediaries between researchers and program users.
Several reviews of registries (also known as “clearinghouses”) have discussed their standards for determining EBI effectiveness; for example, their criteria for rating a program’s evaluation design and outcomes (Fagan & Buchanan, 2016; Means et al., 2015). Some registries promote high standards, requiring (1) a theoretical rationale/logic model, (2) true experimental evidence of effectiveness, that is, randomized controlled trials, (3) replication, (4) sustainability of effects, and (5) an absence of harmful effects. These criteria are consistent with evaluation guidelines recommended by the NCIM of the National Academies (2009) and the Society for Prevention Research (Gottfredson et al., 2015). Other registries employ less rigorous criteria, designating interventions as effective when they meet minimum thresholds and not requiring an experimental design, replication, and/or sustained effects. Registries may also vary as to whether they conduct a systematic review of all possible evaluations or a limited search of the literature (Fagan & Buchanan, 2016). The conclusion is that different registries have different standards and methods for rating EBI effectiveness. Harmonizing procedures and coding categories for determining evaluation quality is a major challenge the field faces but has begun to address (Neuhoff et al., 2015).
Equally important, but subject to far less attention, is comparing if and how registries rate and provide information about the degree to which well-evaluated programs can be well implemented. Likewise, although a substantial proportion of the prevention science literature focuses on evaluating the implementation quality and effectiveness of particular EBIs, less attention has been paid to the importance of dissemination readiness—or the degree to which an EBI is available to be implemented as designed and has the organizational capacity to provide materials, training, and information for potential users to adopt and implement the EBI with fidelity. The goal of this article is to fill this knowledge gap by examining if and how the dissemination readiness of EBIs has been defined, evaluated, and communicated to potential users across registries. To do so, this article compares different evidence-based registries on (1) the degree to which they use dissemination readiness as a criterion for inclusion/exclusion of EBIs and (2) the extent of information and support they provide about dissemination readiness to facilitate real-world implementation. Based on this analysis, we offer recommendations for how registries can help promote EBI uptake by considering dissemination readiness.
The sample for the current study consists of public and private registries identifying evidence-based programs, where programs are defined as manualized interventions designed to change specific risk and protective factors linked by an established theory to a targeted outcome with experimental evidence demonstrating effectiveness (Elliott et al., in press; Elliott & Fagan, 2017). Some registries include or are limited to evidence-based practices, defined as interventions using common, generic, or similar approaches for which there is some variation in how these types of practices are conceptualized (Elliott & Fagan, 2017; Lipsey, 2018). We excluded these evidence-based practices because they require less specificity in the activities and procedures involved in the implementation and are more difficult to precisely and consistently assess for dissemination readiness than evidence-based programs (Elliott et al., in press).
Challenges in Scaling-Up EBIs
To date, many EBIs have not been widely disseminated, which reduces their potential to improve public health and well-being at scale (Fagan et al., 2020; Hawkins et al., 2015; Spoth et al., 2013). The dissemination and implementation science literature has posited many explanations for the significant gap between scientific advances in the development and testing of EBIs and their use in communities. The potential for EBI scale-up typically depends on the characteristics of the EBIs themselves (e.g., their complexity and requirements), the organizations within which EBIs are implemented (e.g., their infrastructure, leadership, and resources), and the larger community context (e.g., knowledge of and support for EBIs; Damschroder et al., 2009; Fagan et al., 2020; Glasgow & Emmons, 2007; Hawkins et al., 2015; Mendel et al., 2008).
A variety of frameworks have been developed to describe these barriers and identify solutions to improve the dissemination and implementation of EBIs (Bowen & Zwi, 2005; Milat et al., 2015; Neta et al., 2015; Nilsen, 2015; Tabak et al., 2012). The current study is guided by the Interactive Systems Framework (ISF; see Wandersman et al., 2008). Among the many factors that affect EBI scale-up, the ISF emphasizes the lack of knowledge among practitioners that such interventions exist, an inadequate understanding among these groups about how and why EBIs are effective, and limited organizational infrastructure and supports to facilitate EBI implementation in communities. According to Wandersman et al. (2008), researchers, organizations, and prevention practitioners must work together to increase EBI scale-up. Specifically, the ISF promotes coordinated activities across three systems: (1) a Prevention Synthesis and Translation System (including prevention registries), which summarizes and disseminates information about EBIs in user-friendly formats; (2) a Prevention Support System, which provides training and technical assistance to EBI users; and (3) a Prevention Delivery System, which actually implements EBIs in community settings (Wandersman et al., 2008).
A Prevention Synthesis and Translation System is essential to increase public knowledge about EBIs, especially given the increasing volume and scientific complexity of literature evaluating the effectiveness of programs and practices (Bastian et al., 2010). Moreover, few practitioners have the training in scientific methods to understand evaluation literature and most do not or cannot access information about EBI effectiveness from scientific journals (Brownson, Fielding, & Green, 2018). Even when practitioners are aware that specific EBIs exist and recognize the value of using EBIs with a sound methodological foundation, they may not understand all the requirements needed to fully implement them. As Wandersman et al. (2008, p. 175) note, “For example, the journal articles and textbooks describing research do not contain enough detail on the content and implementation of innovations.” To translate research into practice, implementation requirements must be synthesized and translated for users, a task that could be taken on by evidence-based registries.
Although providing accessible information is crucial, users of EBIs often need direct assistance to support EBI implementation. The Prevention Support System (Wandersman et al., 2008) can help address the challenges of delivering EBIs with fidelity. This system includes assistance from developers to provide training, ongoing coaching, and tools to monitor implementation. Recognizing these needs, some EBI developers have created formal companies, often referred to as purveyor organizations (Fixsen et al., 2013; Neuhoff et al., 2017). Purveyors may establish short- or long-term relationships with implementing organizations to deliver initial training workshops, provide ongoing training and/or coaching, and/or assist with collecting, analyzing, and using data on implementation to improve EBI delivery (Franks & Bory, 2015). Although the degree to which these relationships actually enhance EBI implementation and effectiveness has not been subject to much formal evaluation, numerous studies have found that receipt of training and technical assistance, implementation monitoring, and continuous quality improvement all increase implementation quality and the likelihood that EBIs will result in anticipated improvements in behavioral health (Durlak & DuPre, 2008; Fixsen et al., 2005; Katz & Wandersman, 2016; Leeman et al., 2015).
EBIs are delivered within a social, organizational, and community setting, that is, the Prevention Delivery System. Such a system may lack the supports and infrastructure needed for high-quality implementation, particularly the financial and human resources required for EBI delivery. In addition, administrators may not prioritize EBI delivery, the organization may not have an adequate number of qualified staff to deliver EBIs or supervise implementers, and systems for monitoring EBI implementation may be lacking (Aarons et al., 2011; Damschroder et al., 2009). Noting the interaction across systems, Wandersman et al. (2008) suggest that the Prevention Delivery System depends in part on the two other systems: synthesizing and translating information in the Prevention Synthesis and Translation System and providing support and assistance in the Prevention Support System can enhance organizational and staff capacity to deliver EBIs.
The Role of Evidence-Based Registries
Evidence-based registries are a critical component of the Prevention Synthesis and Translation System (Wandersman et al., 2008). They aid in assessing applied research and evaluation studies of interventions according to evidentiary standards so that potential users can best decide which interventions to support or adopt for implementation (Means et al., 2015). To this end, they seek to demystify evaluation literature, make it accessible to practitioners, and raise awareness about the existence of EBIs. Registries also, to varying degrees, aim to encourage the use of interventions that have evidence of effectiveness and discourage the use of interventions that have not been well evaluated or that have been shown to be harmful.
The increasing number of available EBIs has led to a corresponding increase in the number of evidence-based registries that provide information about EBIs (Paulsell et al., 2017). According to one review (Burkhardt et al., 2015), in 2014, there were 20 registries within the United States that had “documentable criteria for including or excluding programs” intended to improve behavioral health outcomes. These registries have helped raise awareness that EBIs exist and, to some extent, have helped increase EBI dissemination (Baum et al., 2013; Hallfors et al., 2007; Neuhoff et al., 2015; Novins et al., 2013). At the same time, they have been critiqued on several grounds. Prior studies have emphasized that vast disparities exist across registries in the standards and processes used to rate EBIs’ evaluation quality and effectiveness (Fagan & Buchanan, 2016; Hallfors et al., 2007; Weiss et al., 2008). Registries that do not have rigorous scientific standards or that rely on inconsistent application of the standards by reviewers may be seen as identifying “effective” interventions despite lacking strong evidence of impact, which can undermine confidence in the utility of EBIs (Elliott & Fagan, 2017). Moreover, the variation in standards can be confusing to the public, especially because this variation can result in different EBIs being identified as effective on different lists—as many studies have concluded (Fagan & Buchanan, 2016; Means et al., 2015).
Some have also criticized registries for failing to provide the types of information most needed by users, such as implementation costs and requirements as well as the “how” and “why” EBIs work (Horne, 2017; Neuhoff et al., 2015). For example, users need to know how the intervention is implemented, including documentation of the core intervention content and components, staffing requirements, recommended delivery context(s), and intensity and/or frequency of delivery. Information about acceptable and/or recommended adaptations to these requirements could also be provided to assist users in tailoring the program to their population and/or context. To describe “why” programs work, registries could provide information about the intervention’s theoretical rationale and theory of change or logic model that identify the intended outcomes and components or mechanisms that produce outcomes. All this information should help ensure that EBIs are implemented as intended.
Relevant to the current article, users are likely to benefit from content about an EBI’s dissemination readiness; however, registries may or may not provide this information (Fagan & Buchanan, 2016; Paulsell et al., 2017). In fact, some registries include interventions that are not available for adoption; for example, EBIs that may have been well evaluated and shown to be effective may not have been formalized to the degree that communities could replicate the activities designed to improve outcomes. Although the primary and/or sole purpose of some registries may be to rate only effectiveness, criteria and information about dissemination readiness are also important to guide users’ adoption of an EBI. The degree to which registries provide such content is unclear and the goal of this article is to fill this gap in the knowledge base.
The lack of attention to and information about dissemination readiness is important, given that poor implementation quality can undermine EBI effectiveness (Durlak & DuPre, 2008; Lipsey, 2018). Further, the complexity of an intervention has been linked to its implementation quality (Damschroder et al., 2009; Greenhalgh et al., 2004; Rohrbach et al., 2006), and research indicates that even “simple” EBIs may be difficult to replicate with integrity. However, when interventions are continually monitored for fidelity, and when program developers and/or purveyors provide proactive support, high-quality implementation and positive program effectiveness can be achieved (Durlak & DuPre, 2008; Elliott & Mihalic, 2004; Fixsen et al., 2005; Kemp, 2016; Mihalic et al., 2008; Rhoades et al., 2012; Welsh et al., 2010). In contrast, when external supports are inadequate or lacking, implementation challenges are likely to arise and threaten EBI effectiveness.
Surveys of program developers and purveyors have indicated variation in the degree to which they can provide community agencies with the supports and resources necessary for high-quality implementation (Elliott & Mihalic, 2004; Franks & Bory, 2015; Neuhoff et al., 2017). This research has found that many EBI developers lack the capacity to fully assist community agencies and staff with EBI implementation (Brownson et al., 2018; Fagan et al., 2020; Franks & Bory, 2015; Neuhoff et al., 2017). Paulsell et al. (2017) note how registries have played an increasingly important role in disseminating information about EBIs, becoming a trusted source for administrators and practitioners seeking to implement EBIs. It therefore seems logical to conclude that registries serve as a liaison between users and EBIs and that this role can extend beyond the identification of “what works” to include “what is ready for dissemination.” To the extent that registries expect the information they provide to contribute to the adoption of programs that work rather than the adoption of those without good evidence, they will want to provide specific information to help users implement the best programs for their goals. However, there has been little systematic investigation of how well evidence-based registries collect and communicate information about dissemination readiness, despite the significant function these Prevention Synthesis and Translation systems serve in educating the public about EBIs.
To our knowledge, Paulsell and colleagues (2017) have conducted the only prior review on this topic. Their research differs from the current study, however, in several ways. Their study (1) examined federal registries, whereas we consider public and private registries; (2) compared the degree to which registries provided consistent information on EBI effectiveness, external validity, and dissemination readiness (which they term “feasibility of implementation”), whereas we concentrate and provide more data on the last factor, including whether or not registries use dissemination readiness as a criterion for excluding/including EBIs; and (3) analyzed information from databases prior to September 2016 (the date of the online publication), whereas our review is more recent and takes into account recent changes to existing registries. The current study thus provides an updated examination of the extent to which both public and private registries take dissemination readiness into account, including whether or not they use dissemination readiness as a criterion for inclusion/exclusion of EBIs and the degree to which they provide users with information on dissemination readiness.
Method
Sample
This article involves the review of public and private registries that identify evidence-based programs. Six inclusion criteria define our sample. First, the sample includes registries of evidence-based programs that have documentable, evaluative inclusion or exclusion criteria for rating those interventions. The registries must examine individual program evaluations. We exclude those relying solely on systematic reviews and meta-analyses that mainly identify evidence-based practices, such as the Guide to Community Preventive Services (The Community Guide) or the Cochrane Database of Systematic Reviews. Second, the registries must be web based and active at the time this analysis was conducted, with regular updating, summaries of evaluation results, and program information presented in clear and easy-to-understand formats. Consistent with Burkhardt et al. (2015, p. 93), online registries are chosen over printed listings (i.e., lists or reports) as “decision makers would likely access searchable web-based resources in preference to static printed materials, which can quickly become out of date.” Third, to ensure objective, consistent reviews, the registries must make certification decisions based on multiple raters with knowledge of research evaluation methods, provide raters with training in the standards used to determine effectiveness, and require the use of standardized, structured rating instruments. Fourth, the registries must review evidence-based programs designed to prevent negative behavioral health outcomes such as mental health conditions, substance use, delinquency/crime, and other behaviors that bear on health. We also include registries that focus on positive development, including academic achievement and prosocial behaviors. Given our focus on prevention, we exclude registries with a sole focus on treatment or implementation issues such as participant engagement. Fifth, the registries must be original rather than rely on another registry for their content (e.g., the Results First Clearinghouse Database, the Office of Juvenile Justice and Delinquency Prevention’s Model Programs Guide, and the What Works in Prisoner Reentry Clearinghouse rely on reviews by other registries). Sixth, the present analysis reviews registries based in the United States that house information on programs developed and distributed across the United States. Table 1 lists the 11 registries meeting our inclusion criteria. The table also identifies the funding source (public vs. private), purpose/mission, and (where available) intended audience as reported on each registry’s website.
Table 1.
Sample of Registries Meeting Inclusion Criteria.
| Name | Funding | Purpose, Mission, Intended Audience (as Reported by Each Registry) |
|---|---|---|
| Blueprints for Healthy Youth Development | Privatea | Identifies scientifically proven and scalable interventions that prevent or reduce the likelihood of antisocial behavior and promote a healthy course of youth development. |
| California Evidence-Based Clearinghouse for Child Welfare (CEBC) | Publicb | Provides a searchable database of programs that can be utilized by professionals that serve children and families involved with the child welfare system. |
| Compendium of Evidence-Based Interventions and Best Practices for HIV Prevention | Public (HHS) | Identifies evidence-based interventions that show evidence of efficacy in changing sex or drug-injection behaviors that directly impact HIV transmission risk. |
| Crime Solutions | Public (DOJ) | Assesses the strength of the evidence about whether interventions achieve criminal justice, juvenile justice, and crime victim services outcomes. |
| Evidence for ESSA | Privatec | Examines K–12 reading and math programs through the lens of the federal education law, the Every Student Succeeds Act of 2015, to help schools find programs that meet ESSA standards. |
| Home Visiting Evidence of Effectiveness (HomVEE) | Public (HHS) | Provides information about which home visiting models have evidence of effectiveness as defined by HHS, as well as detailed information about the samples of families who participated in the research, the outcomes measured, and the implementation guidelines for each model. |
| Social Programs that Work | Privatea | Identifies social programs shown in rigorous studies to produce sizable, sustained benefits so that they can be deployed to help solve social problems. |
| Teen Pregnancy Prevention Evidence Review (TPPER) | Public (HHS) | Identifies programs with evidence of effectiveness in reducing teen pregnancy, sexually transmitted diseases, and other sexual risk behaviors. |
| Title IV-E Prevention Services Clearinghouse (PSC) | Public (HHS) | Conducts an objective and transparent review of research on programs and services intended to provide enhanced support to children and families and prevent foster care placements. |
| What Works Clearinghouse (WWC) | Public (DOE) | Provides educators with the information they need to make evidence-based decisions using results from high-quality research to answer the question “What works in education?” |
| Washington State Institute for Public Policy (WSIPP) | Publicd | Conducts systematic reviews of programs implemented across the United States to identify effective policy options, compares the benefits and costs (specific to Washington State), and assesses whether a policy option will (at a minimum) break even financially. Topic areas include (but are not limited to) education, criminal justice, health, and general government. |
Note. n = 11. HHS = U.S. Department of Health and Human Services; DOJ = U.S. Department of Justice; DOE = U.S. Department of Education.
aCurrently funded by Arnold Ventures, a private foundation. bCalifornia Department of Social Services’ Office of Child Abuse Prevention. cCurrently funded by the Annie E. Casey Foundation, a private foundation. dEstablished through specific funding from the Washington Legislature in the 1983–1985 biennial budget in the appropriation for The Evergreen State College.
Analysis
At least two of the study authors reviewed the implementation information available to potential users for each of the included registries. This involved a systematic qualitative review of all 11 websites including each registry’s homepage, subpages, and linked documents. Materials were reviewed for a random sample of 20 programs listed by each registry to assess (1) the existence of a formal requirement for dissemination readiness and (2) the presence of details communicating information about dissemination readiness to potential program users. The reviewers reached consensus on the conclusions reported below. To guide the review, we used seven broad categories associated with dissemination readiness that fall within the three general constructs of the ISF (Wandersman et al., 2008): (1) availability of the program and its key activities, including a contact person and access to updated materials; (2) implementation support, such as materials, training, and fidelity measures; and (3) costs associated with delivery, in terms of both dollars and human resources. Table 2 provides a description of these codes.
Table 2.
Dissemination Readiness Coding Criteria.
| Criteria | Description | Standard |
|---|---|---|
| Activities defined | There is a clear description of the activities of the intervention. | Availability |
| Accessibility | The version of the intervention that met standards for evaluation quality is currently available for sites wishing to implement it. At a minimum, there is a contact person with knowledge of the intervention’s theoretical rationale and activities that were evaluated and shown to improve outcomes. Ideally, there is an up-to-date website and program materials can be ordered. | Availability |
| Materials | There is a curriculum, protocols, and/or explicit implementation procedures with instructions that specify the intervention content and guide the implementation of the intervention. Ideally, this includes a manual or series of manuals specifying in detail what the intervention comprises. | Implementation support |
| Fidelity measures | There is an established system for monitoring implementation fidelity including fidelity measures. Ideally, this includes measures to assess pre/post changes in outcomes. | Implementation support |
| Training | There are levels of formal training or qualifications for those delivering the intervention. Ideally, this includes training provided by certified instructors. It also ideally includes ongoing technical assistance and coaching. | Implementation support |
| Financial resources | The financial resources required to deliver the intervention are specified. Ideally, there is a description of costs associated with implementing the program, including start-up costs; intervention implementation costs; intervention implementation support costs such as technical assistance and training; and costs associated with fidelity monitoring and evaluation. A breakdown of cost for these separate components, when appropriate, is identified. | Costs |
| Human resources | There is reported information on the human resources required to deliver the intervention. Ideally, there is a description of staff resources needed to deliver the intervention, including required staff ratios, the required level of qualifications and skills for staff, and the amount of time they will need to allocate (to cover delivery, training, supervision, preparation, and travel). | Costs |
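
As a concrete illustration of how these coding categories could be recorded during a review, the sketch below encodes the seven Table 2 criteria, their broad constructs, and the three-level rating scale reported later in Table 3 (Not Provided, Sometimes Provided, Complete Information). The class and field names are ours, introduced only for illustration; they do not correspond to any registry's or the authors' actual tooling.

```python
# Hypothetical encoding of the Table 2 criteria and the three-level rating
# scale; names are illustrative only and reflect no registry's own tooling.
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    NOT_PROVIDED = 0
    SOMETIMES_PROVIDED = 1
    COMPLETE_INFORMATION = 2


# The seven Table 2 criteria, grouped by the three broad constructs.
CRITERIA = {
    "activities_defined": "availability",
    "accessibility": "availability",
    "materials": "implementation support",
    "fidelity_measures": "implementation support",
    "training": "implementation support",
    "financial_resources": "costs",
    "human_resources": "costs",
}


@dataclass
class RegistryReview:
    """Hypothetical record of one registry's dissemination readiness review."""
    name: str
    readiness_required: str   # "No", "Partial", or "Yes"
    ratings: dict             # criterion name -> Rating

    def complete_count(self) -> int:
        """Number of criteria rated as Complete Information."""
        return sum(r is Rating.COMPLETE_INFORMATION for r in self.ratings.values())


# Example: a registry with a partial requirement and gaps on the cost criteria.
example = RegistryReview(
    name="Example Registry",
    readiness_required="Partial",
    ratings={
        **{c: Rating.COMPLETE_INFORMATION for c in CRITERIA},
        "financial_resources": Rating.NOT_PROVIDED,
        "human_resources": Rating.SOMETIMES_PROVIDED,
    },
)
print(example.complete_count())  # 5
```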
Results
The first objective of this article is to determine whether registries used dissemination readiness as a criterion for inclusion/exclusion of EBIs on their websites. Based on our review, the 11 registries can be classified into three categories: (1) no dissemination readiness requirement for inclusion/exclusion of EBIs, (2) partial inclusion/exclusion requirement for dissemination readiness, and (3) formal inclusion/exclusion requirement for dissemination readiness. Following this determination, we rated the registries on the seven broad components of dissemination readiness listed in Table 2. Results of these analyses are described in the next sections.
No Dissemination Readiness Requirement
The Washington State Institute for Public Policy (WSIPP) lists many interventions with information on the benefits and costs of programs and policies, but it differs in important ways from other registries. First, its mission is to carry out practical, nonpartisan research at the direction of the Washington State legislature rather than offer accessible information about EBIs to potential users. Second, it makes no effort to list programs available for community adoption. Rather, WSIPP conducts systematic reviews of programs implemented across the United States to identify effective policy options, compares the benefits and costs of each policy option (specific to Washington State), and assesses whether a policy option will (at a minimum) break even financially. WSIPP thus does not have a formal criterion of dissemination readiness for inclusion/exclusion of prevention programs, and a review of 20 listed programs on the website shows that it offers no information on dissemination readiness other than a brief description of the program.
Social Programs That Work does not list dissemination readiness as a criterion in determining whether a program receives a Top Tier, Near Top Tier, or Suggestive Tier rating, which correspond to the level of evidence the registry assigns for programs in producing desired effects if implemented faithfully in settings and populations similar to those in the original studies. Based on a review of 20 listed programs, the website provides a brief description of each program and its activities. Most but not all programs also have cost estimates and a website for further information.
The Compendium of Evidence-Based Interventions and Best Practices for HIV Prevention applies what is termed the risk reduction (RR) efficacy criteria to designate studies as best evidence or good evidence. These criteria “focus on quality of study design, quality of study implementation and analysis, and strength of evidence of efficacy.” Documents on the RR Review Methods and RR Efficacy Criteria do not mention any of the components of implementation. For example, authors are contacted to obtain critical information on intervention efficacy that is missing or in need of clarification but not for the details of dissemination readiness.
A review of a sample of 20 programs shows that five (one quarter) are listed with the note “An intervention package is not available at this time” and an email address of the study author. The listing of these programs indicates that the registry does not have a dissemination readiness requirement. The other 15 sampled programs note that an intervention package is available and list a website. Beyond a brief program description and identification of the preferred characteristics of program deliverers, the program summaries contain no information on materials, fidelity support, training, financial costs, or human resources.
Crime Solutions includes a component in the program ratings that asks reviewers to rate whether there is a full and thorough description that “should serve as a guide for the implementation of the program.” However, that component does not translate into a requirement for dissemination readiness. Details on key program components that relate to dissemination readiness are part of the rating for a conceptual framework domain. For an overall rating, the conceptual framework domain is combined with three other domains on design quality, outcome evidence, and program fidelity. Based on the rating system, the dissemination readiness component represents only a small part of the total rating. Further, the rating does not consider the availability of the program, access to materials, fidelity monitoring, training, costs, or staffing, and programs that are not currently “active” or available for use are listed on the registry. To that extent, the registry does not require dissemination readiness in its inclusion/exclusion criteria.
A review of a subsample of 10 Effective programs and 10 Promising programs shows that all include a description of program components or activities, although the details provided vary widely. However, about 15% (i.e., three of the 20 programs sampled) are listed as “not active” and likely limited in dissemination readiness. Only 13 of the 20 programs have information on availability in the form of a contact person or website and only 6 of the 20 programs offer cost information. There are no sections on the website that describe program materials, fidelity, training, or staffing, and few of the program descriptions mention these dissemination topics.
What Works Clearinghouse (WWC) has no requirement for dissemination readiness. The screening process defines eligibility for review based on effects, language, time frame, originality, research design, sample, and outcome but does not mention availability or access to the program. The review process focuses on the methodological requirements of evaluation studies, as described in the Procedures and Standards Handbook, which also does not mention the readiness of the program for adoption or replication. The website lists programs that meet WWC standards without reservation or with reservations based on the methodological requirements, but within these categories, it lists programs both with and without evidence of potentially positive effects. The focus of our review is on those with potentially positive effects (i.e., those listed on the website with a colored icon) but note that some of these programs lack a study showing a statistically significant positive effect.
A review of 20 programs listed on the website reveals that about half provide details related to dissemination readiness. An intervention report for each program has largely complete information on program activities, contact details, and cost. The reports often include a disclaimer noting that the information comes from publicly available sources such as websites and available research, but they also mention a standard developer contact process. Three of the program reports note that the contact person did not respond to a request to review the information, and five of the reports lack details on costs. Only 10 give information on materials and training, and none offered guidance on implementation fidelity measures. Additional summaries of individual studies meeting WWC standards include more information on implementation support. However, they describe only the study authors’ actions to ensure implementation fidelity during the evaluation rather than what is available to help potential users implement the program with fidelity.
Translating the format of the WWC reports into ratings on the seven criteria listed in Table 2 shows some gaps. A description of program activities is available for all programs, and most but not all list a contact person. Most but not all programs have cost information, and about half have details on needed materials and available training. Details on human resources and/or staffing are often lacking, perhaps because teachers will deliver most programs without additional cost. There is no information on fidelity support, although fidelity checks used by the studies are sometimes listed.
California Evidence-Based Clearinghouse for Child Welfare (CEBC) comes close to having a formal dissemination readiness requirement. The screening process for potential programs indicates that it selects those with a manual, a legitimate contact person, and information about the program provided by the contact person on a detailed questionnaire—all indicators of dissemination readiness. However, many programs (60 of the total 462 programs to date or 13%) are still listed on the website even if a contact person has not completed the questionnaire. According to the CEBC website, programs that fall into this category have a shortened outline (compiled by CEBC staff from publicly available sources). These programs appear not to be ready for dissemination. The website is unique in offering a wide variety of general resources related to implementation, but most of the information does not relate to a specific program.
A review of 20 programs on the website shows that most have detailed information relevant to implementation but with some gaps. Each program—except for the nonresponders—has subsections on essential program components, program delivery, education and training, implementation information, and contact information. These subsections describe program activities, availability, materials, and training. They sometimes include details on fidelity measures but never provide information on costs associated with adopting the program, though a subsection titled “Resources Needed to Run Program” typically has information on needed supplies and space. The staff needs are often implicit in the description of program activities but are not always explicitly laid out.
Partial Dissemination Readiness Requirement
Evidence for Every Student Succeeds Act (ESSA) has no formal requirement for dissemination readiness, though its Standards and Procedures document states that only studies that use a form of a program that could “in principle be replicated” are included for review. In addition, the site indicates that the registry evaluates only studies of programs that are currently available to schools in the United States. The registry lists provider contact information and costs as well as general information on staffing requirements, professional development/training, and technology needed to implement the programs listed on its website. Otherwise, Evidence for ESSA is limited to school programs that have been rated as having strong, moderate, or promising evidence based on the design and strength of evidence on outcomes.
A review of a sample of 20 programs listed on the website shows that 13 (65%) have relatively complete information on dissemination readiness. The webpages for these programs include material on availability (e.g., contact person, website), training, cost, staffing requirements, and materials (as part of training and cost). They lack only information on fidelity monitoring. The other seven (35%) programs are less complete, listing only the program activities. In addition, the website notes the following:
For all programs that meet the top three ESSA evidence standards, we are asking developers to provide information and tips for success, identify ambassador schools, and recruit principals and teachers to write about their experiences implementing the program. It will take some time to gather this information, so please be patient and check back regularly.
Teen Pregnancy Prevention Evidence Review (TPPER) does not formally require dissemination readiness, but the review criteria take components of dissemination readiness into account. According to Review Protocol 5.0, programs that meet review criteria for evidence of effectiveness are assessed for “implementation readiness” using existing materials and documents and input from developers and distributors. With this information, the review team calculates an “implementation readiness” score ranging from 0 to 8, with higher scores indicating the programs most ready to be implemented. The score is based on yes/no answers to questions posed in three general areas: (1) curriculum and materials (Has defined curriculum with lesson plans and/or activities? Has defined core or required components? Has facilitator’s guide or instructions?); (2) training and staff support (Formal preimplementation training by qualified trainers available? Supplemental training or ongoing technical support available?); and (3) fidelity monitoring tools and resources (Has defined logic model? Defines fidelity guidelines and benchmarks? Offers monitoring and evaluation tools?). However, cost information is not factored into the implementation readiness score.
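
To make the scoring concrete, here is a minimal sketch (not TPPER's actual tooling) of the 0–8 implementation readiness tally described above: one point per “yes” across the eight items in the three areas. Item names are paraphrased from the questions listed above and are illustrative only.

```python
# Minimal sketch of the 0-8 implementation readiness score described in
# Review Protocol 5.0: one point per "yes" across eight items in three areas.
# Item names are paraphrased and illustrative; not TPPER's actual tooling.
TPPER_ITEMS = {
    "curriculum_and_materials": [
        "has_defined_curriculum",             # lesson plans and/or activities
        "has_defined_core_components",
        "has_facilitator_guide",
    ],
    "training_and_staff_support": [
        "offers_preimplementation_training",  # formal training by qualified trainers
        "offers_ongoing_support",             # supplemental training or technical support
    ],
    "fidelity_monitoring": [
        "has_logic_model",
        "defines_fidelity_guidelines",
        "offers_monitoring_tools",
    ],
}


def implementation_readiness_score(answers: dict) -> int:
    """Sum yes/no answers (True/False) over the eight items; range 0-8."""
    return sum(
        bool(answers.get(item, False))
        for items in TPPER_ITEMS.values()
        for item in items
    )


# Example: a program missing monitoring tools and ongoing support scores 6.
answers = {item: True for items in TPPER_ITEMS.values() for item in items}
answers["offers_monitoring_tools"] = False
answers["offers_ongoing_support"] = False
print(implementation_readiness_score(answers))  # 6
```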
A sample of 20 programs listed on the website shows that, out of a maximum implementation readiness score of 8, nine have scores of 8, seven have scores of 7, two have scores of 6, one has a score of 5, and one has a score of 3. The most common gaps under implementation readiness relate to a lack of information on fidelity monitoring tools and financial and human resources. Along with the score, the webpage lists relatively detailed information on the key components of implementation readiness: program components and activities, contact information, materials and resources, staffing requirements, training support, and fidelity monitoring. The registry lacks details on cost but lists contacts and webpages for pricing information.
The Home Visiting Evidence of Effectiveness (HomVEE) website does not list dissemination readiness as an evidentiary criterion, but nearly all 20 sampled programs rated as having evidence of effectiveness include detailed information on dissemination readiness. For effective programs, the registry team gathers information from impact and implementation studies and conducts internet searches to find implementation materials and guidance available from developers and national program offices. Program developers or contacts are invited to review and comment on the profiles before their release.
The website provides detailed implementation profiles for each rated program, both those with and without evidence of effectiveness. Each profile contains information on the model (e.g., components, length, and intensity), prerequisites (e.g., staffing and supervision requirements), training (e.g., materials, trainers, technical assistance), materials and forms (e.g., curriculum, fidelity measurement), and estimated costs (e.g., labor, materials, training). Each profile also provides a contact person, website, and list of recommended readings. The information covers all seven key components of dissemination readiness in Table 2.
A review of specific programs that were rated as having evidence of effectiveness shows largely complete information. In a subsample of 10 of the 20 effective programs, 8 cover all the listed areas. One is missing only cost information, and another is listed with the note “Implementation support is not currently available for the model as reviewed,” and its implementation profile is incomplete. For a subsample of 10 of the 26 programs not having evidence of effectiveness, the implementation profiles are less complete, but such information is less important since these programs are not evidence based.
Despite not having a formal requirement for dissemination readiness, HomVEE provides such thorough and complete implementation information for nearly all the programs listed on its website that it meets the requirement in practice. One program is listed as not currently available, but the gap is prominently noted. For all other programs, the implementation profiles contain more information than any of the other registries discussed so far.
Formal Dissemination Readiness Requirement
The Title IV-E Prevention Services Clearinghouse formally requires dissemination readiness for the programs it includes on the website. According to its Handbook of Standards and Procedures, programs and services must be currently available, “must have available written protocols, manuals, or other documentation that describe how to implement or administer the practice,” and the “protocols, manuals, or other documentation must be available to the public to download, request, or purchase.” These requirements cover training, certification, or other prerequisites needed to access manuals and documentation. In addition, reviewers determining whether the program should be included note available supports for implementation and fidelity.
At the time of the present review, this registry had just been developed, in accordance with the Family First Prevention Services Act of 2018. Thus, the website lists only 12 programs, and three of those are rated as “Does not currently meet criteria” due to poorly designed evaluations. For dissemination readiness of each of the 12 programs, the website briefly describes the program activities and, under “Program or Service Delivery and Implementation,” offers partially complete information. The section has very brief details on education, certification, training books, manuals, other documentation, and contact information. There is no information on fidelity measures, financial resource costs, or human resource costs.
Blueprints for Healthy Youth Development lists dissemination readiness as one of four formal requirements for meeting its standard of evidence (the others are evaluation quality, intervention impact, and intervention specificity). The dissemination readiness dimension is evaluated only for programs that satisfy the other three requirements. Programs that meet the Blueprints standard of evidence must be available for use. According to the dissemination readiness criteria, this means they have materials or instructions “that specify the intervention content and guide the implementation of the intervention,” such as manuals, training, and technical assistance, and (where available) specify costs associated with implementation (such as those for start-up, implementation, and support) and staff resources (e.g., staff qualifications and time commitments) needed to deliver the intervention. Blueprints certifies programs that meet their four standards as Model/Model Plus or Promising.
The Fact Sheet for each U.S.-based certified program on the Blueprints website lists contact information, describes program activities, and has developer-provided details on training and technical assistance. When available, certified programs also have benefits and cost figures as calculated by WSIPP.
In addition, the webpage for each U.S.-based certified program contains information on costs and funding strategies (where available). Under Start-Up Costs, there is information on costs for initial training and technical assistance, curriculum materials, licensing, and other activities. Under Intervention and Implementation Costs, there is information on ongoing curriculum and materials, staffing, and other activities. Under Implementation Support and Fidelity Monitoring Costs, there is information on costs for ongoing training and technical assistance, fidelity monitoring and evaluation, ongoing licensing fees, and other activities. Finally, there is a One-Year Cost Example that gives specific figures. The information on Funding Strategies presents possible sources of funding for program implementation.
A review of a subsample of 20 U.S.-based programs certified by Blueprints shows that all have complete information on all but one of the dissemination readiness criteria described in Table 2. All 20 of the sampled programs have contact details plus general information on training and technical assistance, funding, costs, and materials. Four of the 20 sampled programs have only sparse information on fidelity monitoring, however.
Registry Comparison
Table 3 summarizes the results of the two aspects of dissemination readiness considered in this review. First, 2 of the 11 registries (18%) meet the standard of having a formal requirement for dissemination readiness: Title IV-E Prevention Services Clearinghouse and Blueprints for Healthy Youth Development. Just over one quarter (n = 3, or 27%) of the registries have a partial requirement for dissemination readiness. For example, HomVEE comes very close in practice to having a requirement but does not have a formal standard. Among the remaining registries, the California Evidence-Based Clearinghouse typically (but not always) offers detailed information on dissemination readiness despite having no requirement. The majority (n = 6, or 54% of the sample), however, have no dissemination readiness standard for inclusion/exclusion of an EBI.
Table 3.
Summary of Dissemination Readiness Requirement and Information on Dissemination Provided by Each Registry.
| Registry | Dissemination Readiness Required |
|---|---|
| Washington State Institute for Public Policy (WSIPP) | No |
| Social Programs That Work | No |
| Compendium of EBI and Best Practices for HIV Prevention | No |
| Crime Solutions | No |
| What Works Clearinghouse | No |
| California Evidence-Based Clearinghouse for Child Welfare | No |
| Evidence for ESSA | Partial |
| Teen Pregnancy Prevention Evidence Reviews | Partial |
| Home Visiting Evidence of Effectiveness | Partial |
| Title IV-E Prevention Services Clearinghouse | Yes |
| Blueprints for Healthy Youth Development | Yes |

Note. Each registry was also rated as Not Provided, Sometimes Provided, or Complete Information on the seven dissemination readiness criteria in Table 2 (activities defined, accessibility, materials, fidelity measures, training, financial resources, and human resources); these component ratings are summarized in the text.
Second, registries varied considerably on the completeness of information provided on the seven key components of dissemination readiness listed in Table 2. The most frequently overlooked components are information on fidelity measures, financial resource costs, and human resource costs. The WWC and California Evidence-Based Clearinghouse present much information on dissemination readiness but have some gaps. Only TPPER, HomVEE, and Blueprints for Healthy Youth Development provide complete or nearly complete information. Taking both issues into account, only Blueprints has a requirement for dissemination readiness and provides relatively complete information relevant to it, though the TPPER and HomVEE registries come close in the amount of content provided on the dissemination readiness of programs listed on their websites.
Discussion
Evidence-based prevention registries translate for a lay audience complicated information about an intervention’s body of scientific evidence. The websites these registries maintain have the capacity to present information in accessible, clear, and easy-to-understand formats, listing the program activities and other implementation details, the outcomes that have been changed by the intervention, and the setting(s) and/or demographic characteristics of participants targeted by the intervention. All this information can guide users through assessing needs, fit, and capacity to select an EBI appropriate for their circumstances (Paulsell et al., 2017). Registries can therefore play a critical role in the adoption of EBIs, helping policy makers, practitioners, and funders make informed decisions about their investment in preventive interventions.
Despite the importance of the registries as intermediaries between researchers and the public (Neuhoff et al., 2015; Paulsell et al., 2017) and despite the attention of previous research on how the registries define the standards for methodological soundness and evidence of efficacy (e.g., Fagan & Buchanan, 2016; Means et al., 2015), there has been little study of the degree to which registries have clear and consistent guidelines or standards to evaluate the dissemination readiness of EBIs and make this information available to users. The present study fills this research gap by rating 11 web-based registries that review the evidence base of programs seeking to improve child health and prosocial outcomes on (1) the degree to which they use dissemination readiness as an evidentiary criterion for listing programs on their website and (2) the extent of information related to dissemination readiness they provide to support real-world implementation of the EBIs.
This study is unique in that we focused on the extent to which registries use dissemination readiness as a criterion for inclusion/exclusion of EBIs. Our results show that 2 of the 11 reviewed have a formal requirement for dissemination readiness (Title IV-E Prevention Services Clearinghouse and Blueprints for Healthy Youth Development). Meanwhile, just over one quarter have a partial requirement, and over half have no dissemination readiness standard. The findings suggest that many registries list some programs on their website that may have well-conducted evaluation studies showing positive impact but that may not actually be available for adoption by interested users. We also found wide variability in terms of the completeness of the information provided on the requirements or resources needed to effectively implement EBIs. This finding is consistent with Paulsell et al. (2017), who conducted a study similar in method and purpose by comparing the degree to which eight federal registries provided consistent information on EBI dissemination readiness (or “feasibility of implementation”). Like the present study, Paulsell et al. also found that registries vary in the degree to which they require that EBIs have supports such as implementation manuals, training, technical assistance, and fidelity criteria or assessment tools. In addition, Paulsell et al. found that registries differed in how they gather and provide information about dissemination readiness.
Only Blueprints for Healthy Youth Development both has a requirement for dissemination readiness and provides relatively complete information relevant to dissemination readiness, though TPPER and HomVEE come close in the amount of content provided on the dissemination readiness of the programs listed on their registries. It is important to note, however, that unlike the teen pregnancy and home visiting registries, Blueprints reviews a diverse set of programs that aim to improve a wide range of outcomes, have substantially different implementation procedures, and require more detailed dissemination information.
The lack of attention to dissemination readiness is perhaps not surprising, given that the primary and/or sole goal of registries is to evaluate EBI effectiveness and make this information available to the public. Although registries may view dissemination readiness as a separate issue that falls outside of their scope, we disagree. Adding a dissemination readiness standard and review process seems to be a logical and very practical next step for registries to take, given that their users must consider factors beyond program effectiveness when selecting an intervention (Paulsell et al., 2017). Moreover, there is strong evidence that (1) many communities do not replicate prevention programs perfectly, (2) poor implementation quality can compromise program effectiveness, and (3) the provision of training and technical support can increase implementation fidelity and bolster desired outcomes (Durlak & DuPre, 2008; Fixsen et al., 2005; Katz & Wandersman, 2016; Leeman et al., 2015). Allowing users to compare the levels of implementation support that EBI developers and purveyors can provide will enhance the ability of registries to support EBI uptake and high-quality implementation.
We recognize that adding clear and consistent policies and standards for rating dissemination readiness will undoubtedly reduce the number of EBIs listed on a registry’s website since not all programs demonstrated to be effective will have the capacity to be disseminated. We also recognize that rating dissemination readiness will place additional burden on registries. Staff will need to gather such details from published reports and/or program developers and purveyors, and such information may be incomplete or difficult to obtain, a reality noted in our review of several registries. Moreover, information about dissemination readiness may need to be periodically updated, and it will need to be conveyed to users in an easy-to-understand format, which may require some redesign of the registry websites. Despite these challenges, increasing attention to dissemination readiness is important. As Paulsell and colleagues (2017, p. 71) note,
reporting consistent information about readiness for scale-up sends a strong signal to model developers and purveyors about the level of support they should be prepared to offer users and what it means to fully develop an intervention that is ready to be scaled up.
Registries that include information about dissemination readiness not only help potential users but can also foster change within the program developer community, ideally increasing the amount of support and materials that developers provide to users.
Registries might consider providing additional guidance to potential program users that transcends specific programs. For example, the What Works Clearinghouse produces “practice guides,” which present general recommendations for educators to address challenges in their classrooms and schools. These recommendations are based not on specific programs but rather on reviews of research, the experiences of practitioners, and the opinions of a panel of nationally recognized experts. Similarly, Haegerich and colleagues (2017) recommend that, in addition to providing information about effective programs, registries offer “technical packages” with detailed guidance on how to implement and evaluate a limited number or package of EBIs. The Centers for Disease Control and Prevention has produced a number of such guides to support violence prevention efforts (https://www.cdc.gov/violenceprevention/pub/technical-packages.html). Registries might also consider ways to create “learning communities” by fostering discussion or meetings between EBI users via webinars or in-person conferences. For example, Blueprints for Healthy Youth Development hosts a biennial conference that brings together practitioners, policy makers, funders, developers, and evaluators to provide guidance and tools that enhance the adoption, implementation, and sustainability of EBIs listed on its website, though such information can also be applied to programs listed on other registries. The conference also addresses implementation challenges and the importance of implementing EBIs with fidelity.
All these recommendations, however, rely on a commitment from funders to invest in the dissemination readiness of EBIs. That is, funding is needed for developers to manualize their programs and produce robust dissemination materials (Fagan et al., 2020). For example, in addition to funding rigorous evaluations of fully developed programs, the Institute of Education Sciences (the evaluation arm of the U.S. Department of Education) provides competitive grants to develop and pilot test new or revised interventions that aim to improve learner outcomes. As this article points out, supporting user decisions about adopting EBIs entails more than identifying potentially relevant evidence through literature searches and evaluating each study’s evaluation quality (i.e., internal validity). Additional funding is therefore also needed for registries to develop and adopt dissemination readiness standards so that EBIs can be implemented with fidelity and achieve their intended outcomes. One example is the Annie E. Casey Foundation, which provided funding to Blueprints for Healthy Youth Development to develop the tools the registry currently uses to gather information on dissemination readiness and historically paid contractors to collect this information as part of its Evidence2Success initiative (https://www.aecf.org/work/community-change/evidence2success/). Funding, however, is a challenge: as Burkhardt et al. (2015) point out, a number of registries are administered by government agencies and must fulfill specific priorities; as a result, they may not be able to change their focus or review methods. Furthermore, funding agencies and private foundations have finite resources and must determine how to allocate them effectively. We argue, however, that prioritizing the dissemination readiness of EBIs is an important consideration that can increase the potential of registries to improve public health and well-being at scale.
Limitations of the Study and Recommendations for Future Research
This review of registries’ attention to dissemination readiness was intentionally limited to their assessment of evidence-based programs, given that standards for defining the dissemination readiness of an evidence-based program are more straightforward than those for evidence-based practices. A well-specified program often has implementation guidance, training, and other supports for delivering the model with fidelity, and these supports can be tracked and coded (as described in Table 2). In contrast, evidence-based practices encompass varied procedures and implementation strategies, often leaving dosage and other specifications unclear or open to interpretation and thus difficult to code. Given this variation, practices often lack information about costs and other resource requirements. Thus, criteria for determining the dissemination readiness of evidence-based practices are different from (and more difficult to define than) those for evidence-based programs. Considering the dissemination readiness of evidence-based practices, however, is an emerging topic and one likely to gain prominence, since practices can be adopted without the need to purchase materials and training from a developer or vendor.
Another limitation of this study is the lack of reliable and valid scales for quantifying the dissemination readiness of an intervention for community adoption. The current study relied on qualitative assessments of a registry’s approach to considering dissemination, including the codes listed in Table 2. As such, our criteria involve some judgment, and their application has not been reliably standardized. Future research should explore ways to develop quantitative measures. Such work might use the components listed in Table 2 as a starting point, or the components and rating developed by the National Registry of Evidence-based Programs and Practices (NREPP), which is no longer active, keeping in mind that NREPP’s numeric rating of readiness for scale-up has not been validated. Although a daunting task, synthesizing complex dissemination and implementation information into a concise and accessible rating may be helpful for users, especially if consensus can be reached about which elements of scalability are most important (Paulsell et al., 2017). Doing so will require additional research to develop reliable and valid metrics for quantifying and studying the dissemination readiness of an EBI.
A third limitation stems from the focus on registries located in the United States. As in the United States, there is a growing number of prevention registries in Europe. These include, for example, the xChange Prevention Registry, which is associated with the European Monitoring Centre for Drugs and Drug Addiction (https://www.emcdda.europa.eu/best-practice/xchange); the Grüne Liste Prävention (Prevention Green List; https://www.gruene-liste-praevention.de/nano.cms/datenbank/information) in Germany; the Databank Effectieve Jeugdinterventies (Effective Youth Intervention Database; https://www.nji.nl/nl/Databank/Databank-Effectieve-Jeugdinterventies) in the Netherlands; and the Early Intervention Foundation Guidebook (https://guidebook.eif.org.uk/) in the United Kingdom. Many of these registries appear to show the same mixed attention to dissemination readiness as those reviewed in this article, but a formal evaluation of this issue has not been conducted. Drawing on lessons learned here, future research should examine standards across European registries, particularly with respect to dissemination readiness, with the goal of establishing a shared set of scientific prevention standards internationally.
Conclusion
Staff from state and local agencies, as well as teachers and school district administrators, who receive public funds to implement a range of education and social programs are increasingly looking to registries for guidance about which EBIs best meet the needs of their target populations. As such, registries are taking on greater prominence (Paulsell et al., 2017). This article describes similarities and differences among registries in determining the readiness of an EBI for implementation in communities, schools, and public agencies, with special attention to how dissemination readiness factors into their evidentiary criteria. Web-based registries have developed strong criteria and methods for assessing study quality (i.e., internal validity) and evidence of effectiveness reported at the study level. Developing additional tools and/or standards for identifying the dissemination readiness of EBIs, for example, by considering their external validity, implementation supports, and scalability, will provide critical information to guide users’ adoption decisions. This is an important message for funders, practitioners, developers, purveyors, researchers, and registry staff alike, to help realize the promise of EBIs in addressing social problems and improving outcomes for populations of interest.
Acknowledgments
The authors would like to thank Jennifer Balliet, Jennifer C. Cole, and Charleen J. Gust for invaluable coding during the screening phase of this project.
Author Biographies
Pamela R. Buckley is a research associate and Fellow in the Institute of Behavioral Science at the University of Colorado Boulder and Director and Co-Principal Investigator of Blueprints for Healthy Youth Development. Dr. Buckley’s research interests include statistics, intervention research, and evaluation methodology, and she conducts policy-relevant research.
Abigail A. Fagan is a professor in the Department of Sociology and Criminology & Law at the University of Florida. Dr. Fagan’s research focuses on the etiology and prevention of adolescent substance use, delinquency, and violence, with an emphasis on examining the ways in which scientific advances can be successfully translated into effective crime and delinquency prevention practices.
Fred C. Pampel is a research professor of Sociology and Senior Research Associate at the Institute of Behavioral Science at the University of Colorado Boulder. Dr. Pampel’s research interests include health inequality, tobacco use, and statistical methods for evaluation research.
Karl G. Hill is the principal investigator of Blueprints for Healthy Youth Development and directs the Program on Problem Behavior and Positive Youth Development at the University of Colorado Boulder. Over the last thirty years, his work has focused on two key questions: What are the optimal family, peer, school, and community environments that encourage healthy youth and adult development, and how do we work with communities to make this happen? In addition, he has focused on developing and testing interventions to shape these outcomes and on working with communities to improve youth development and break intergenerational cycles of problem behavior.
Authors’ Note: The views and opinions expressed here are solely those of the authors and may not be attributed to the Foundation.
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported with a grant from Arnold Ventures.
ORCID iD: Pamela R. Buckley https://orcid.org/0000-0002-8268-3524
References
- Aarons G. A., Hurlburt M., Horwitz S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health, 38, 4–23.
- Bastian H., Glasziou P., Chalmers I. (2010). Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLOS Medicine, 7, 9.
- Baum K., Blakeslee K. M., Lloyd J., Petrosino A. (2013). Violence prevention: Moving from evidence to implementation. Institute of Medicine, National Academies of Science.
- Bowen S., Zwi A. B. (2005). Pathways to “evidence-informed” policy and practice: A framework for action. PLOS Medicine, 2(7), e166.
- Brownson C. A., Allen P., Yang S. C., Bass K., Brownson R. C. (2018). Scaling up evidence-based public health training. Preventing Chronic Disease, 15, 180315. 10.5888/pcd15.180315
- Brownson R. C., Fielding J. E., Green L. W. (2018). Building capacity for evidence-based public health: Reconciling the pulls of practice and the push of research. Annual Review of Public Health, 39, 27–53.
- Burkhardt J. T., Schroter D. C., Magura S., Means S. N., Coryn C. L. S. (2015). An overview of evidence-based program registers (EBPRs) for behavioral health. Evaluation and Program Planning, 48, 92–99. 10.1016/j.evalprogplan.2014.09.006
- Catalano R. F., Fagan A. A., Gavin L. E., Greenberg M. T., Irwin C. E. Jr., Ross D. A., Shek D. T. L. (2012). Worldwide application of prevention science in adolescent health. Lancet, 379, 1653–1664.
- Damschroder L. J., Aron D. C., Keith R. E., Kirsh S. R., Alexander J. A., Lowery J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50.
- Durlak J. A., DuPre E. P. (2008). Implementation matters: A review of the research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350.
- Elliott D. S., Buckley P., Gottfredson D. C., Hawkins J. D., Tolan P. (in press). Evidence-based programs and practices: Assessing the potential effectiveness of these interventions for improving juvenile justice system outcomes. Criminology & Public Policy.
- Elliott D. S., Fagan A. A. (2017). The prevention of crime. Wiley Blackwell.
- Elliott D. S., Mihalic S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1), 47–53. 10.1023/B:PREV.0000013981.28071.52
- Fagan A. A., Buchanan M. (2016). What works in crime prevention? Comparison and critical review of three crime prevention registries. Criminology and Public Policy, 15(3), 617–649.
- Fagan A. A., Bumbarger B. K., Barth R. P., Bradshaw C. P., Cooper B. R., Supplee L. H., Walker D. K. (2020). Scaling-up evidence-based interventions in U.S. public systems to prevent behavioral health problems: Challenges and opportunities. Prevention Science, 20, 1147–1168.
- Fixsen D. L., Blase K. A., Metz A., Van Dyke M. (2013). Statewide implementation of evidence-based programs. Exceptional Children, 79(2), 213–230.
- Fixsen D. L., Naoom S. F., Blase K. A., Friedman R. M., Wallace F. (2005). Implementation research: A synthesis of the literature. University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). https://nirn.fpg.unc.edu/sites/nirn.fpg.unc.edu/files/resources/NIRN-MonographFull-01-2005.pdf
- Franks R. P., Bory C. T. (2015). Who supports the successful implementation and sustainability of evidence-based practices? Defining and understanding the roles of intermediary and purveyor organizations. In McCoy K. P., Diana A. (Eds.), The science, and art, of program dissemination: Strategies, successes, and challenges: New directions for child and adolescent development (Vol. 149, pp. 41–56). John Wiley & Sons.
- Glasgow R. E., Emmons K. (2007). How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health, 28, 413–433.
- Gottfredson D. C., Cook T. D., Gardner F. E. M., Gorman-Smith D., Howe G. W., Sandler I. N., Zafft K. M. (2015). Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: Next generation. Prevention Science, 16(7), 893–926. 10.1007/s11121-015-0555-x
- Greenhalgh T., Robert G., Macfarlane F., Bate P., Kyriakidou O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581–629.
- Haegerich T. M., David-Ferdon C., Noonan R. K., Manns B. J., Billie H. C. (2017). Technical packages in injury and violence prevention to move evidence into practice: Systematic reviews and beyond. Evaluation Review, 41(1), 78–108.
- Hallfors D., Pankratz M., Hartman S. (2007). Does federal policy support the use of scientific evidence in school-based prevention programs? Prevention Science, 8(1), 75–81. 10.1007/s11121-006-0058-x
- Hawkins J. D., Jenson J. M., Catalano R. F., Fraser M. W., Botvin G. J., Shapiro V., Brown C. H., Beardslee W., Brent D., Leslie L. K., Rotheram-Borus M. J., Shea P., Shih A., Anthony E., Haggerty K. P., Bender K., Gorman-Smith D., Casey E., Stone S. (2015). Unleashing the power of prevention. NAM Perspectives (Discussion paper). https://nam.edu/perspectives-2015-unleashing-the-power-of-prevention/
- Horne C. S. (2017). Assessing and strengthening evidence-based program registries’ usefulness for social service program replication and adaptation. Evaluation Review, 41(5), 407–435.
- Katz J., Wandersman A. (2016). Technical assistance to enhance prevention capacity: A research synthesis of the evidence base. Prevention Science, 17, 417–428.
- Kemp L. (2016). Adaptation and fidelity: A recipe analogy for achieving both in population scale implementation. Prevention Science, 17(4), 429–438. 10.1007/s11121-016-0642-7
- Leeman J., Calancie L., Hartman M. A., Escoffery C. T., Herrmann A. K., Tague L. E., Moore A. A., Wilson K. M., Schreiner M., Samuel-Hodge C. (2015). What strategies are used to build practitioners’ capacity to implement community-based interventions and are they effective? A systematic review. Implementation Science, 10, 80. 10.1186/s13012-015-0272-7
- Lipsey M. W. (2018). Effective use of the large body of research on the effectiveness of programs for juvenile offenders and the failure of the model programs approach. Criminology and Public Policy, 17, 189–198.
- Means S., Magura S., Burkhart B. R., Schroter D. C., Coryn C. L. S. (2015). Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Evaluation and Program Planning, 48, 100–116.
- Mendel P., Meredith L. S., Schoenbaum M., Sherbourne C. D., Wells K. B. (2008). Interventions in organizational and community context: A framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health, 35(1–2), 21–37.
- Mihalic S., Fagan A. A., Argamaso S. (2008). Implementing the LifeSkills Training drug prevention program: Factors related to implementation fidelity. Implementation Science, 3, 5.
- Milat A. J., Bauman A., Redman S. (2015). Narrative review of models and success factors for scaling up public health interventions. Implementation Science, 10, 113. 10.1186/s13012-015-0301-6
- National Research Council and Institute of Medicine. (2009). Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. National Academies Press.
- Neta G., Glasgow R. E., Carpenter C. S., Grimshaw J. M., Rabin B. A., Fernandez M. E., Brownson R. C. (2015). A framework for enhancing the value of research for dissemination and implementation. American Journal of Public Health, 105, 49–57. 10.2105/AJPH.2014.302206
- Neuhoff A., Axworthy S., Glazer S., Berfond D. (2015). The what works marketplace: Helping leaders use evidence to make smarter choices. The Bridgespan Group. https://www.bridgespan.org/bridgespan/Images/articles/the-what-works-marketplace/the-what-works-marketplace.pdf
- Neuhoff A., Loomis E., Ahmed F. (2017). What’s standing in the way of the spread of evidence-based programs? The Bridgespan Group. https://www.bridgespan.org/bridgespan/Images/articles/spread-of-evidence-based-programs/spreading-evidence-based-programs.pdf
- Nilsen P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10, 53.
- Novins D. K., Green A. E., Legha R. K., Aarons G. A. (2013). Dissemination and implementation of evidence-based practices for child and adolescent mental health: A systematic review. Journal of the American Academy of Child and Adolescent Psychiatry, 52(10), 1009–1025.
- Paulsell D., Thomas J., Monahan S., Seftor N. S. (2017). A trusted source of information: How systematic reviews can support user decisions about adopting evidence-based programs. Evaluation Review, 41(1), 50–77.
- Rhoades B. J., Bumbarger B. K., Moore J. E. (2012). The role of a state-level prevention support system in promoting high-quality implementation and sustainability of evidence-based programs. American Journal of Community Psychology, 50, 386–401.
- Rohrbach L. A., Grana R., Sussman S., Valente T. W. (2006). Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation and the Health Professions, 29(3), 302–333.
- Sandler I., Wolchik S. A., Cruden G., Mahrer N. E., Ahn S., Brinks A., Brown C. H. (2014). Overview of meta-analyses of the prevention of mental health, substance use, and conduct problems. Annual Review of Clinical Psychology, 10, 243–273.
- Spoth R. L., Rohrbach L. A., Greenberg M. T., Leaf P., Brown C. H., Fagan A. A., Catalano R. F., Pentz M. A., Sloboda Z., Hawkins J. D., & Society for Prevention Research Type 2 Translational Task Force. (2013). Addressing core challenges for the next generation of Type 2 translation research and systems: The translation science to population impact (TSci Impact) framework. Prevention Science, 14, 319–351.
- Tabak R. G., Khoong E. C., Chambers D., Brownson R. C. (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43(3), 337–350.
- Tolan P. H. (2014). Making and using lists of empirically tested programs: Value for violence interventions for progress and impact. In The evidence for violence prevention across the lifespan and around the world: Workshop summary (pp. 94–106). National Academies Press. https://www.blueprintsprograms.org/publications/Tolan.pdf
- Wandersman A., Duffy J., Flaspohler P. D., Noonan R. K., Lubell K., Stillman L., Blachman M., Dunville R., Saul J. (2008). Bridging the gap between prevention research and practice: The Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181.
- Weiss C. H., Murphy-Graham E., Petrosino A., Gandhi A. G. (2008). The fairy godmother and her warts: Making the dream of evidence-based policy come true. American Journal of Evaluation, 29(1), 29–47.
- Welsh B. C., Sullivan C. J., Olds D. L. (2010). When early crime prevention goes to scale: A new look at the evidence. Prevention Science, 11, 115–125.
