Abstract
Background and aim:
U.S. state governments have the responsibility to regulate and license behavioral healthcare interventions, such as for addiction and mental illness, with increasing emphasis on implementing evidence-based programs (EBPs). A serious obstacle to this is lack of clarity or agreement about what constitutes “evidence-based.” The study’s purpose was to determine the extent to which and in what contexts web-based Evidence-based Program Registries (EBPRs) are referenced in state government statutes and regulations (“mandates”) concerning behavioral healthcare. Examples are: What Works Clearinghouse; National Registry of Evidence-Based Programs and Practices; Cochrane Database of Systematic Reviews.
Methods:
The study employed the Westlaw Legal Research Database to search for 30 known EBPR websites relevant to behavioral healthcare within the statutes and regulations of all 50 states.
Results:
There was low prevalence of EBPR references in state statutes and regulations pertaining to behavioral healthcare; 20 states had a total of 33 mandates that referenced an EBPR. These mandates usually do not rely on an EBPR as the sole acceptable source for classifying a program or practice as “evidence-based.” Instead, EBPRs were named in conjunction with internal state or external sources of information about putative program effectiveness, which may be less valid than EBPRs, to determine what is “evidence-based.”
Conclusion:
Greater awareness of scientifically based EBPRs and greater understanding of their advantages need to be fostered among state legislators and regulators charged with making policy to increase or improve the use of evidence-based programs and practices in behavioral healthcare in the U.S.
INTRODUCTION
Background
The provision of behavioral healthcare in the United States involves significant expenditures of financial and human resources. Mental health expenditures, which include addiction treatment and prevention, totaled $225 billion across the U.S. in 2019 (Open Minds, 2021). The provision of these services occurs within a complex ecology of agencies and political entities, including federal, state, and local government agencies.
The responsibility for overseeing the provision of behavioral healthcare in the U.S. typically falls on state governments (Bruns et al., 2008; Fagan et al., 2019; Hallfors & Cho, 2007; Reynolds & Ramakrishnan, 2018; Rieckmann et al., 2015; Substance Abuse and Mental Health Services Administration [SAMHSA], 2021; White House, 2021). Federal authorities such as the White House (2021), SAMHSA (2021) and the Centers for Disease Control, as well as private organizations such as Pew-MacArthur (2014) and the Arnold Foundation, have been encouraging states to take a more evidence-based approach to supporting treatment and prevention programs among service provider organizations. This approach can be conceptualized as evidence-based decision-making.
Evidence-based decision-making in government can be understood as the systematic use of empirical evidence to guide decisions with the intent to improve the common good (Malekinejad et al., 2018; SAMHSA, 2021). In the behavioral health disciplines, evidence-based decision-making typically focuses on the identification and implementation of effective programs and practices delivered to those needing services (Freund et al., 2019). In brief, evidence-based decision-making involves the selection and implementation of programs, practices, or policies that research has shown to work. This decision-making process is not perfect, however, and there are some criticisms of the use of evidence-based programs that may reduce the likelihood of adoption of evidence-based programs and practices.
Critiques of EBPs
Despite the call for increased use of EBPs, there are issues that may prevent their use. First, a clear meaning of evidence-based practice is lacking, and there are different standards for determining “what works” (Neuhoff, Axworthy, Glazer, & Berfond, 2015; Parrish, 2018; Stephenson, 2017; Westbrook, Avellar, & Seftor, 2017). Program success is dependent on contextual factors and outcomes addressed, yet EBPs may not be responsive to these variations in context (Neuhoff et al., 2015; Walker, Lyon, Aos, & Trupin, 2017). Many interventions show mixed results or results may be misleading, even when rigorous designs are used (Neuhoff et al., 2015; Stephenson, 2017). Lastly, decision-makers often do not get information about EBPs in a systematic manner (Neuhoff et al., 2015). The presence of these issues means that decision-makers and policy-makers need trusted sources of systematically derived information about EBPs.
As part of this process, policy makers are turning to syntheses of research evidence for specific programs, clinical practices or models (Malekinejad et al., 2018). Unfortunately, these policy-makers and their staff may not have sufficient expertise in research or evaluation to understand how to adequately evaluate the evidence in support of specific programs or practices (Bastian et al., 2010; Malekinejad et al., 2018; Maranda, et al., 2021). Thus, they would benefit from expert guidance to assist with the policy and programming decision-making process, especially when it comes to identifying what constitutes an EBP.
One resource for identifying EBPs is evidence-based program registries (EBPRs) for behavioral health programs and practices, also known variously as “evidence-based clearinghouses”, “evidence-based program databases”, or “evidence-based resource centers”. For the purposes of the present study, EBPRs may be defined as searchable, web-based collections of behavioral health programs and practices that summarize research and evaluation studies to make determinations of the evidence base supporting those programs and practices (Burkhardt, Schröter, Magura, Means, & Coryn, 2015). In order to understand the need for EBPRs, one must also understand the purposes and uses of EBPRs, their scope of practice, the structures and methodologies used by EBPRs to make judgements about programs that work, the dissemination strategies used by EBPRs, and the critiques of EBPRs that may inhibit their use.
Purposes and uses of EBPRs
Evidence-based program registries may be created in response to legislative mandates, to support funding decisions, or to inform decision-makers about which programs should be implemented (Buckley, Fagan, Pampel, & Hill, 2020; Burkhardt et al., 2015; Horne, 2017; Means, Magura, Burkhardt, Schröter, & Coryn, 2015; Westbrook et al., 2017). They may also be used by decision-makers to validate programs that have already been implemented (Burkhardt et al., 2015; Walker et al., 2017). Additionally, they may serve as a means of disseminating information about EBPs (Buckley et al., 2020; Paulsell, Thomas, Monahan, & Seftor, 2017; Walker et al., 2017). However, Maranda et al. (2021) identified a lack of penetration of EBPRs in state behavioral health agency websites. This may be due to lack of awareness of EBPRs on the part of regulatory agencies, but may also reflect recognition that EBPRs have weaknesses in addition to advantages for decision-makers (see below).
Scope of EBPRs
Evidence-based program registries vary in scope and sponsorship, including EBPRs sponsored by state governments, the federal government, private not-for-profit foundations, and international non-governmental organizations (Burkhardt et al., 2015; Horne, 2017; Means et al., 2015). The commitment of the U.S. federal government to this approach to evidence-based programming is indicated by its administration or funding of at least 15 EBPR sites (Horne, 2017). They cover topics including social services, education, public health, child welfare, mental health and substance abuse, and criminal justice and re-entry programs, among others (Horne, 2017; Neuhoff et al., 2015). Users of EBPRs may include researchers, policy-makers, administrators, program operators/practitioners, evaluators, advisers who guide decision-makers, and the general public (Neuhoff et al., 2015; Paulsell et al., 2017; Westbrook et al., 2017).
Structure, methodologies, and standards of evidence of EBPRs
EBPR websites compile existing research information about individual programs, program models, and/or clinical practices, with the intention of identifying interventions that are supported by research-based evidence (Burkhardt et al., 2015; Horne, 2017). They typically provide program descriptions, formal ratings or other assessments of program effectiveness, consideration of the quality of the research supporting an intervention, and, in some cases, implementation assistance including links to trainings, program manuals and other relevant resources (Burkhardt et al., 2015; Hallfors & Cho, 2007; Horne, 2017; Jossie, 2019; Maranda et al., 2021; Means et al., 2015; Mihalic & Elliott, 2015; Petrosino, 2014; Westbrook et al., 2017).
Although the particular criteria used by each EBPR may vary, their ratings of interventions are predicated on well-accepted hierarchies of evidence that maximize internal validity through the use of rigorous research designs such as randomized controlled trials or quasi-experimental designs (Burkhardt et al., 2015; Horne, 2017; Test, Kemp-Inman, Diegelmann, Hitt, & Bethune, 2015; Walker et al., 2017; Westbrook et al., 2017). These ratings may also vary in the types of judgements rendered, which include simple include/exclude-type ratings that indicate whether an intervention is evidence-based or not, or multi-tiered rating systems that grade an intervention based on its documented level of evidence (Burkhardt et al., 2015; Means et al., 2015). It is important to note that not all EBPRs produce ratings of effectiveness – some only rate the quality of evidence and let the user decide which programs are the most efficacious for their particular clinical context (Westbrook et al., 2017). Finally, EBPRs may vary in the way they summarize evidence. For example, they may produce a judgement based on several dimensions of evidence or only a single summary rating (Neuhoff et al., 2015).
Dissemination strategies for EBPRs
Information about EBPRs may be disseminated through publicly available websites, consultations with experts, at research meetings, at professional conferences, through formal publications and reports, through grant solicitations, and through subscription lists (Burkhardt et al., 2015; Westbrook et al., 2017).
Critiques of EBPRs
Several relevant criticisms of EBPRs have been identified in the literature. First, some authors have noted that EBPRs often differ in the criteria and standards they use to judge evidence of effectiveness and/or in the rigor with which they apply those criteria and standards (Buckley et al., 2020; Burkhardt et al., 2015; Macklem, 2020; Means et al., 2015; Neuhoff et al., 2015; Stephenson, 2017; Test et al., 2015; Walker et al., 2017; Westbrook et al., 2017; Zack, Karre, Olson, & Perkins, 2019). Second, EBPRs often do not include information necessary for implementation of programs, such as program costs, required staffing, and readiness for dissemination (Buckley et al., 2020; Horne, 2017; Macklem, 2020; Neuhoff et al., 2015; Paulsell et al., 2017). Third, EBPRs may lack information about external validity, transportability of programs across contexts, and fit between programs and clinical contexts. Specifically, interventions that work under controlled research conditions may not be as effective in routine, “real world” implementation (Haynes, 1999; Horne, 2017; Walker et al., 2017; Westbrook et al., 2017). Fourth, EBPRs do not typically allow for qualitative research or case studies, which in some cases can provide valid evidence in favor of program adoption (Broeder & Donze, 2010; Green & Britten, 1998; Grypdonck, 2006; Williams, Boylan, & Nunan, 2019). Fifth, there is concern about the timeliness of generating or updating evidence reviews (Burkhardt et al., 2015; Means et al., 2015). Finally, ratings of individual interventions may differ across registries, and the terminology used to describe those ratings may vary across EBPRs as well (Buckley et al., 2020; Macklem, 2020; Means et al., 2015; Stephenson, 2017; Walker et al., 2017; Zack et al., 2019).
Despite these criticisms, however, authors have cited EBPRs as important sources of information for decision-makers about “what works” and motivators for increasing the use of evidence-based standards in clinical practice (Buckley et al., 2020; Burkhardt et al., 2015; Paulsell et al., 2017; Zack et al., 2019). Horne (2017:422) notes: “The evidence summaries published by the various evidence-based program registries reflect an enormous amount of work and a genuine contribution to the decision-making capacity of service providers.” The usefulness of these evidence summaries also extends to other decision-makers in the behavioral health disciplines.
Contribution of the present study
In order to adequately engage in evidence-based decision-making, service provider agencies need support from state governments. One of the major ways in which state governments can support evidence-based decision-making in behavioral health is through statutory and regulatory mandates (Beidas et al., 2016; Bruns & Hoagwood, 2008; Fagan et al., 2019; Hallfors & Cho, 2007; Rieckmann et al., 2011; Rieckmann et al., 2015). For these mandates to be effective, they must clearly communicate expectations concerning the implementation of evidence-based programs (EBPs) and they must provide access to information about EBPs or direct the provider agencies to relevant information (Pew-MacArthur, 2017; Results for America, 2018). In other words, the mandates need to set clear parameters around what constitutes an EBP that is acceptable for government funding and implementation. One potential way of accomplishing this objective is through the use of EBPRs in state mandates and resultant policies.
Despite the potential benefit to state governments of using EBPRs, Maranda et al. (2021) found a general lack of depth and breadth of EBPR coverage on state agency websites. That study concluded that increased use of EBPRs, instead of relying solely on internally generated criteria and standards for identifying EBPs, could produce increased efficiency and reduced cost to states in EBP implementation. However, there has been no research on the degree to which EBPRs are included in state statutes and regulations relating to behavioral healthcare. The present study is the first that seeks to better understand how states are using or encouraging the use of EBPRs in their mandates for EBP use. The study also contributes to implementation science, in that external policies and regulations are a key component of the “outer setting” for successful EBP implementation (Damschroder et al., 2009).
Research questions
The present study seeks to document how often and in what ways state governments reference EBPRs in statutes and regulations relevant to behavioral healthcare. The research questions are:
To what extent are EBPR websites referenced in U.S. state government mandates regarding evidence-based programs and practices in behavioral healthcare?
Why and how are EBPR websites referenced in state mandates, i.e., for which service areas, for what purposes, and in what contexts?
Do states vary in how often they reference EBPR websites in their mandates?
METHODS
The present study addresses the research questions by identifying the prevalence, purpose, and contexts of references to EBPRs in state statutes and regulations. It represents a study of the entire population of mandates that mention any existing EBPRs, similar to a study that examined state comprehensive planning statutes addressing physical activity (Charon et al., 2021).
In this study, the term “statutes” refers to laws enacted by state legislatures and “regulations” refers to administrative codes enacted by executive bodies (state agencies) to implement statutes. Although both statutes and regulations carry the force of law, statutes tend to be less specific than regulations because they set the general direction of state government. On the other hand, regulations tend to be more specific because they direct the activities of government (Taylor, 2021). Despite this difference, both carry more weight than the informal policies of state agencies. In addition, both are more difficult to change than informal policies. We will refer to statutes and regulations collectively as “mandates.” (Note that although judicial mandates also exist, they remain outside the scope of this study.)
The present study used a mixed methods approach, wherein the researchers used open coding of the statutes and regulations to identify a set of codes to classify the context and purposes of references to EBPRs and then applied those codes to the data as part of a quantitative analysis of the presence of the identified themes.
Data source
The study utilized the Westlaw Legal Research Database (2021) to search for mentions of EBPRs in state mandates. Westlaw is a leading proprietary legal database available online and is used for legal research by lawyers and scholars. The search included 28 federally and non-federally sponsored EBPRs that were identified in prior research (Maranda et al., 2021) as well as two additional databases meeting the definition of an EBPR: the JBI EBP Database (https://jbi.global/ebp) and the National Governors Association Center for Best Practices (https://www.nga.org/bestpractices/). The latter two include but are not limited to behavioral healthcare. The Westlaw search was conducted in January 2021 and included all 50 states. The search string included the official names of the targeted 30 EBPRs as well as commonly used abbreviations and variants of those names.
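Although the exact Westlaw query is not reported, the construction of such a search string can be sketched in Python. The EBPR names and abbreviations below are a small hypothetical subset of the 30 searched, and Westlaw’s actual connector syntax may differ:

```python
# Sketch of assembling an OR-joined search string from EBPR names and
# common abbreviations (hypothetical subset; the study searched 30 EBPRs).
ebpr_terms = {
    "What Works Clearinghouse": ["WWC"],
    "National Registry of Evidence-Based Programs and Practices": ["NREPP"],
    "Cochrane Database of Systematic Reviews": ["Cochrane Review"],
}

def build_query(terms):
    """Quote each official name and each variant, then join with OR."""
    parts = []
    for name, variants in terms.items():
        parts.append(f'"{name}"')
        parts.extend(f'"{v}"' for v in variants)
    return " OR ".join(parts)

query = build_query(ebpr_terms)
```

Quoting each term as a phrase avoids matching the individual words in unrelated mandates.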
Review and coding of identified mandates
An electronic search of Westlaw was conducted for mentions of any of the 30 EBPRs. For any such “hit,” the corresponding mandate was reviewed to determine whether its primary focus was related to the topic of behavioral health; if so, the study considered it a valid reference. Any non-relevant mandates were excluded from the analysis.
For this study, the term behavioral health intervention is defined as any program, practice, training, source of standards of evidence, or source of information that focuses on changing behaviors related to substance misuse, mental health, child welfare, certain aspects of the criminal justice and education systems with behavioral change components (i.e., offender rehabilitation, dropout prevention, etc.), or general healthcare that would also apply to behavioral health although unspecified. It should be noted that several EBPRs included in the present study, such as Cochrane (2021) and the JBI EBP Database, have a primary focus on medical treatments, but also include interventions related to behavioral healthcare.
A coding scheme was developed using an open coding process, in which the researchers reviewed the texts of the mandates of interest and developed codes based on the content of that text. Two senior researchers participated in coding each mandate, with one conducting the initial coding and the second reviewing the coding for agreement. Any conflicts or difficult-to-code mandates were discussed between the coders until consensus was reached, although the data elements were straightforward enough that few initial disagreements occurred. The identified mandates were coded for the following items: the state to which the mandate pertained, the name(s) of the EBPR referenced, the topic/service area of the mandate, the purpose of the reference, and the context of the reference. The categories used for purpose of the reference appear in Table 1.
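The study resolved coding conflicts by consensus discussion rather than reporting a formal reliability statistic. Purely as an illustration (with hypothetical codes, not the study data), initial agreement between the two coders could be quantified as simple percent agreement:

```python
# Illustrative percent-agreement check between two coders
# (hypothetical codes; not the authors' actual procedure).
def percent_agreement(coder1, coder2):
    """Proportion of items assigned the same code by both coders."""
    if len(coder1) != len(coder2):
        raise ValueError("code lists must have equal length")
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return matches / len(coder1)

initial = ["source_of_interventions", "trainings", "trainings", "standards"]
review  = ["source_of_interventions", "trainings", "evidence",  "standards"]
rate = percent_agreement(initial, review)  # 3 of 4 codes match
```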
Table 1:
Examples of the purposes of EBPR references in statutes and regulations.
| Purpose of EBPR reference | Examples |
|---|---|
| Source of evidence-based interventions | 1st example: “…using an instructional program found to be effective by the What Works Clearinghouse of the Institute of Education Sciences.” 2nd example: “The program is included in the United States substance abuse and mental health services administration’s National Registry of Evidence-Based Programs and Practices (NREPP)” |
| Source of evidence-based trainings | “the secretary shall consider…training programs listed on the best practices registry of the American Foundation for Suicide Prevention and the Suicide Prevention Resource Center.” |
| Provides evidence of effectiveness of specific program(s) | “Peer-reviewed published scientific study means that a study has been cited by the Cochrane Review, the Institute of Medicine, or PubMed Central.” |
| Source of evidence-based standards or guidelines | “Examples of professional standards [include]…B. the United States Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (SAMHSA) National Registry of Evidence-Based Programs and Practices” |
| Source of evidence-based clinical assessments | “ ‘Best Practice Risk Assessment’ means…tools accepted for the… National Registry of Evidence-based Programs and Practices…” |
Note: The EBPRs are underlined. The wording of the mandates is abbreviated, but not changed.
The study identified the following six categories of context for EBPR references:
EBPR referenced along with alternative external source(s) for “evidence-based.”
The specified EBPR may be used to justify a judgment of “evidence-based,” but there are other specific authorities or specific sources of information external to state government that may be used as justification instead of the EBPR.
EBPR(s) referenced as source(s) of “evidence-based” without additional criteria.
One or more EBPRs are used as the basis for judgment of what constitutes an EBP, without any additional criteria being considered.
EBPR referenced along with alternative internal criteria for “evidence-based.”
The specified EBPR may be used to justify a judgment of “evidence-based,” but there are other specific authorities or specific sources of information internal to or developed by state government that may be used as justification instead of the EBPR.
EBPR referenced along with non-specific external source(s) for “evidence-based.”
The specified EBPR may be used to justify a judgment of “evidence-based,” but this is expressed as a preference, and the mandate implies other external sources may be used instead of the EBPR.
EBPR referenced along with alternative internal and external source(s) for “evidence-based.”
The specified EBPR may be used to justify a judgment of “evidence-based,” but there are other specific authorities or specific sources of information either internal or external to state government that may be used as justification instead of the EBPR.
EBPR referenced along with added internal criteria for “evidence-based.”
The specified EBPR may be used as part of a judgment of “evidence-based,” but there are additional internal criteria (i.e., additional requirements) that must be met in addition to use of the EBPR.
Verbatim abbreviated examples of these different contexts are shown in Table 2 to illustrate the concrete meaning of the categories.
Table 2:
Examples of the context of EBPR references in statutes and regulations.
| Context | Examples |
|---|---|
| EBPR referenced along with alternative external source(s) for “evidence-based” | 1st example: “‘Peer-reviewed published scientific study’ means that a study has been cited by the Cochrane Review, the Institute of Medicine, or PubMed Central.” 2nd example: “Be evidence based, as demonstrated by meeting one of the following criteria: a. The service shall be included as an evidence-based mental health and substance use disorder intervention on the SAMHSA National Registry of Evidence-Based Programs and Practices (NREPP), b. The services shall be published in a peer-reviewed journal and found to have positive effects; or…” |
| EBPR(s) referenced as source(s) of “evidence-based” without additional criteria | “ ‘Best Practice Risk Assessment’ means a research-informed methodology that…may include tools…accepted for the Substance Abuse and Mental Health Services Administration National Registry of Evidence-based Programs and Practices or the Suicide Prevention Resource Center Best Practices Registry.” |
| EBPR referenced along with alternative internal criteria for “evidence-based” | “ ‘Evidence of effectiveness’ means documented results of evaluation assessing the effect of the program …[including]…results of program evaluation conducted in the jurisdiction or an evidence rating developed by matching the program to available research using a nationally recognized clearinghouse of program evaluations, such as those included in the Pew-MacArthur Results First Clearinghouse Database.” |
| EBPR referenced along with non-specific external source(s) for “evidence-based” | “The standards for the JCD Grant Program shall be-- (A) Preference will be given to programs that are consistent with the evidence-based and promising-practices approach described in the Office of Juvenile Justice and Delinquency Prevention’s Model Programs Guide…” |
| EBPR referenced along with alternative internal and external source(s) for “evidence-based.” | “In developing the list of best practice-based programs and research-based practices, the agency and the Health and Human Services Commission shall consider: (1) any existing suicide prevention method developed by a school district; and (2) any Internet or online course or program developed in this state or another state that is based on best practices recognized by the Substance Abuse and Mental Health Services Administration or the Suicide Prevention Resource Center.” |
| EBPR along with added internal criteria for “evidence-based” | “ ‘Suicide Prevention and Intervention Training’ means full-day training based on national guidelines for suicide prevention and best practice (from the Suicide Prevention Resource Center and the American Foundation for Suicide Prevention)… 1. Provides the fundamentals of suicide prevention and current information; 2. Supports early identification and referral of potentially suicidal people; and…” |
Note: EBPRs are underlined. The wording of the mandates is abbreviated.
Analysis
Descriptive statistics concerning the population of mandates that featured references to EBPRs were obtained for all relevant variables using SPSS version 27. Because the mandates represented the entire population, no inferential statistics were calculated.
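The descriptive statistics were produced in SPSS; an equivalent frequency tabulation in plain Python (shown here with hypothetical records, not the study data) might look like:

```python
from collections import Counter

# Hypothetical coded mandate records (the study coded 33 such mandates).
mandates = [
    {"state": "PA", "type": "statute",    "service": "mental health"},
    {"state": "PA", "type": "regulation", "service": "substance misuse"},
    {"state": "TX", "type": "statute",    "service": "mental health"},
]

# Frequency counts and percentages for each coded variable.
type_counts = Counter(m["type"] for m in mandates)
service_counts = Counter(m["service"] for m in mandates)
type_pcts = {k: 100 * v / len(mandates) for k, v in type_counts.items()}
```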
RESULTS
Research question #1: To what extent are EBPRs referenced in state government mandates relating to behavioral health?
The study identified 33 unique mandates that referenced an EBPR, representing 20 of the 50 state governments (40%). Fifteen of these documents were statutes (45.5%) and 18 were regulations (54.5%). The earliest statute was enacted January 5, 2012, and the latest August 1, 2020. It is more difficult to determine when regulations were enacted, because they are often amended multiple times. For the purposes of this study, regulations are considered enacted on their last amendment date, as that amendment represents the mandate in its current form. The earliest regulation was enacted September 15, 2011, and the latest July 2, 2020.
Almost all mandates referenced a single EBPR (n=30, 90.9%) and a few referenced multiple EBPRs (n=3, 9.1%). There were a total of 40 instances where an EBPR was referenced in a mandate (multiple references to the same EBPR in the same mandate were counted once). In all, 11 out of the 30 EBPRs (36.7%) searched for in the present study were represented in these mandates at least once. The distribution of specific EBPRs represented appears in Table 3. There were 6 service areas represented among the included mandates, with some mandates representing multiple service areas. The most common service areas represented by mandates with a reference to an EBPR were mental health (54.5% of mandates) and substance abuse (39.4% of mandates). The distribution of service areas addressed by the mandates referencing EBPRs is shown in the top section of Table 4.
Table 3:
Number of times specific EBPRs are referenced in state government mandates for evidence-based programming.
| EBPR | N | % |
|---|---|---|
| Suicide Prevention Resource Center (SPRC)a,b | 11 | 27.5 |
| National Registry of Evidence-Based Programs and Practices (NREPP)c,d | 8 | 20.0 |
| Evidence-based Practice (EBP) Resource Centere | 4 | 10.0 |
| Cochranef | 3 | 7.5 |
| Home Visiting Evidence of Effectiveness (HomVee)g | 3 | 7.5 |
| What Works Clearinghouseh | 3 | 7.5 |
| Effective Child Therapy: Evidence-based Mental Health Treatment for Children and Adolescentsi | 2 | 5.0 |
| Blueprints for Healthy Youth Developmentj | 2 | 5.0 |
| Results First Clearinghouse Databasek | 2 | 5.0 |
| California Evidence-based Clearinghouse for Child Welfare (CEBC)l | 1 | 2.5 |
| Office of Juvenile Justice and Delinquency Prevention (OJJDP) Model Programs Guidem | 1 | 2.5 |
| Total references to EBPRsn | 40 | 100.0 |
NOTE: This table represents the 11 EBPRs found during the search; the remaining 19 EBPRs had 0 references and are omitted from this table.
a. The SPRC continues to list effective programs based on its former Best Practices Registry, which is no longer being updated, but most of the site is devoted to other supports for EBP such as trainings and technical assistance.
b. University of Oklahoma Health Sciences Center.
c. NREPP was suspended in Jan. 2018 and replaced by the Evidence-based Practices (EBP) Resource Center. NREPP ratings are still accessible at the Results First Clearinghouse Database.
d. Substance Abuse and Mental Health Services Administration.
e. Substance Abuse and Mental Health Services Administration.
f. Cochrane Database of Systematic Reviews.
g. Administration for Children and Families.
h. Institute for Education Sciences.
i. American Psychological Association Division 53.
j. University of Colorado Boulder Institute of Behavioral Science.
k. Pew Charitable Trusts.
l. Rady Children’s Hospital San Diego.
m. Office of Juvenile Justice and Delinquency Prevention.
n. Number of references to EBPRs exceeds number of mandates because a single mandate may reference more than one EBPR.
Table 4:
Characteristics of references to EBPR websites in state government mandates for evidence-based programming.
| Characteristic | N | % |
|---|---|---|
| Service area(s) of the mandate | ||
| Mental health | 18 | 54.5 |
| Substance misuse | 13 | 39.4 |
| Education | 6 | 18.2 |
| Child welfare | 4 | 12.1 |
| Juvenile justice | 1 | 3.0 |
| General behavioral health | 1 | 3.0 |
| Total number of mandatesa | 33 | |
| Purpose of the EBPR website reference | ||
| EBPR is source of evidence-based programs or practices | 17 | 51.5 |
| EBPR is source of evidence-based staff trainings | 8 | 24.2 |
| EBPR provides evidence of effectiveness of specific program(s) | 5 | 15.2 |
| EBPR is source of evidence-based standards or guidelines | 3 | 9.1 |
| EBPR is source of an evidence-based clinical assessment | 1 | 3.0 |
| Total number of mandatesb | 33 | |
| Context of the EBPR website reference | ||
| EBPRs referenced along with… | ||
| Alternative external source(s) for “evidence-based” | 17 | 51.5 |
| As source(s) of “evidence-based” without additional criteria | 8 | 24.2 |
| Alternative internal criteria for “evidence-based” | 3 | 9.1 |
| Non-specific external source(s) for “evidence-based” | 2 | 6.1 |
| Alternative internal and external source(s) for “evidence-based” | 2 | 6.1 |
| Added internal criteria for “evidence-based” | 1 | 3.0 |
| Total number of mandatesc | 33 | 100.0 |
a. Percentages add to more than 100% because a single mandate may mention more than one service area for an EBPR.
b. Percentages add to more than 100% because a single mandate may mention more than one purpose for an EBPR.
c. Categories are mutually exclusive.
Research question #2: Why and how are EBPRs referenced in state mandates?
The most common purposes for referencing an EBPR (in order of frequency) were as a source of evidence-based interventions (51.5% of mandates), as a source of evidence-based trainings (24.2% of mandates), and as a source of evidence of effectiveness of specific interventions (15.2% of mandates). This last category reflects the need for decision makers to gain information about the evidence base for an existing program (i.e., to make benchmark comparisons or vet existing interventions). The full distribution of reference purposes appears in the middle portion of Table 4.
The context of the EBPR references indicates that these mandates usually do not rely on an EBPR as the sole acceptable criterion for classifying an intervention as “evidence-based.” Instead, EBPRs are used in conjunction with internal state sources of information, external sources, or both, to render judgments about which interventions are “evidence-based.” The most common approach (51.5% of mandates) is to accept a specific EBPR’s judgment of “evidence-based” while also accepting the judgments of one or more external authorities or sources of information. In addition, a smaller number of mandates allow alternatives from internal state authorities (9.1%) or from unspecified external authorities or sources of information (6.1%) to classify an intervention as evidence-based. In total, only 8 (24.2%) of the mandates relied on the judgment of one or more EBPR websites as the sole acceptable criterion for classifying an intervention as “evidence-based.” The bottom portion of Table 4 shows the distribution of all contexts for EBPR references.
Research question #3: Do states vary in how often they reference EBPR websites?
The states clearly varied in how often they referenced EBPR websites. As stated above, only 20 states referenced any EBPR website. The number of qualifying mandates varied by state: 14 states (70% of states) had 1 mandate that referenced an EBPR, 3 states (15%) had 2 mandates, 1 state (5%) had 3 mandates, 1 state (5%) had 5 mandates, and 1 state (5%) had 6 mandates; the median across these states was 1 mandate per state. There was also variation in the number of EBPRs referenced per mandate: 31 mandates (91.2% of mandates) referenced 1 EBPR, 1 mandate (2.9%) referenced 2 EBPRs, and 2 mandates (5.9%) referenced 4 EBPRs. The mean number of EBPRs mentioned in mandates across all states was 1.2 per state, and the number of EBPRs mentioned per mandate ranged from 1 to 4. Pennsylvania mentioned the most EBPR websites in a mandate (both Pennsylvania mandates mentioned 4 EBPR websites).
DISCUSSION
Nationally, the study found very few U.S. state government mandates related to behavioral healthcare that referenced specific EBPRs. In a separate examination of 8 selected states, we found over 700 regulations and statutes related to behavioral health that mentioned terms such as “evidence-based,” “research-based,” “best practices,” and similar terms (Maranda et al., 2022). This indicates that the representation of EBPRs in state mandates is low compared with the amount of attention paid to evidence-based issues generally. Only three EBPRs – the Suicide Prevention Resource Center ([SPRC], University of Oklahoma Health Sciences Center, n.d.), the National Registry of Evidence-based Programs and Practices ([NREPP], Substance Abuse and Mental Health Services Administration, n.d.), and the EBP Resource Center (Substance Abuse and Mental Health Services Administration, n.d.) – account for a majority (58%) of the mentions. It should be noted that the EBP Resource Center, sponsored by SAMHSA, is the resource website intended to replace NREPP.
The low representation of EBPRs in state mandates points to an underutilization of this potentially valuable and free resource. This finding parallels those of Maranda et al. (2021), who found similar restrictions in the range of EBPRs referenced and in the number of references to EBPRs generally on state behavioral health department websites. Considerable amounts of scholarly effort, funding (at the federal, state, and foundation levels), and other resources are spent on establishing and maintaining EBPRs. Despite this, the findings of the present study suggest that state legislators and regulators knew of, or chose to use, only a few EBPRs in developing their mandates.
Although the exact explanation for this general lack of EBPR representation in state mandates is not discernible from these results, and may involve political considerations beyond the scope of our study (Yingling & Mallinson, 2020), it does appear that SAMHSA’s sponsorship of the three most referenced EBPRs may explain their greater use in state mandates; SAMHSA is the main federal funding and regulatory agency for mental health and addiction services. This raises some questions. Are these EBPRs referenced more frequently because they are the only ones known to most policy makers, or do policy makers simply trust these sites more than others because they are sponsored by SAMHSA? It might also be the case that, because these specific EBPRs are sponsored by SAMHSA, their recommendations are harder to dispute.
As expected, the service areas represented in the mandates referencing EBPRs are predominantly mental health and substance misuse, which may also explain the more frequent referencing of SAMHSA-sponsored resources. The purposes of the EBPR references are also predominantly as a source of information about evidence-based programs and clinical practice, although other topic areas that can benefit from an evidence-based approach are also mentioned.
One might expect that, since a primary focus of EBPRs is supporting the selection and implementation of EBPs, and since EBPRs are designed to provide high-quality decision support, EBPRs would be used as the sole acceptable source of evidence-based interventions. However, this occurred in only 24.2% of the mandates. In the remaining 75.8% of the mandates, the EBPRs were referenced along with other data sources or criteria for what would be acceptable as “evidence-based.” It is surprising to find such a proliferation of alternative definitions of, and approaches to determining, what counts as “evidence-based” in these state mandates. Not all the alternatives are necessarily unscientific, but they generally lack the rigor of the scientific principles of evaluation research employed by the EBPRs. This suggests that the primary strength of EBPRs is being overlooked, or not sufficiently appreciated, by the legislators and regulators responsible for writing and enacting these state government mandates relating to behavioral healthcare.
It is important to point out that these EBPRs are resources not only for the U.S. but also for behavioral healthcare agencies internationally. The EBPs reviewed on the sites may be adaptable to other national and cultural contexts. An example of this is the Individual Placement and Support (IPS) model of supported employment, which is listed by many of the EBPRs and has been adapted successfully for use in other countries under varying economic conditions (Modini et al., 2016). Finally, several EBPRs are based outside the U.S. and pay special attention to the international program evaluation literature; their locations include Norway (The Campbell Collaboration, 2021), the U.K. (University of York, 2021), and Australia (The University of Adelaide, 2021).
CONCLUSIONS
The substantial contribution of EBPRs to behavioral healthcare is to compile and interpret the evaluation research literature on programs and clinical practices according to accepted scientific principles. This can be a time-consuming and expensive process that may well lie beyond the expertise and resources of individual clinicians, administrators, service provider agencies, and even state government departments (Bastian et al., 2010; Burkhardt et al., 2015; Means et al., 2015). Pew-MacArthur (2017) found that state agencies sometimes hire universities and consultancies to help them in evidence-based decision-making. Numerous federal government agencies and non-profit organizations (Horne, 2017) have invested considerable funds and other resources in establishing and continuing these EBPRs. Thus, by not sufficiently utilizing existing EBPRs, states may be duplicating efforts and incurring unneeded costs. Given that relatively few EBPRs are mentioned in state mandates for EBP use, it may be wise to consolidate some EBPRs that are largely redundant with better-known EBPRs.
The results suggest that greater awareness of the pool of scientifically informed EBPRs, and greater understanding of their advantages, need to be fostered among state legislators and regulators charged with increasing or improving the use of evidence-based interventions in behavioral healthcare. However, it is also possible that the limitations of EBPRs discussed earlier may prevent legislators from being more inclusive of EBPRs in the legislation they offer.
We cannot determine from this study how often agencies within the states are actually using EBPRs. However, it is logical to conclude that including requirements to use EBPRs in mandates may increase EBPR use. We are not suggesting that states make EBPRs the sole source of EBPs. We do suggest that it could streamline and improve the legislative process if states included relevant EBPRs in their definitions of what constitutes EBPs.
Increasing the utilization of existing EBPRs by state legislators and regulators, perhaps supplemented by a cross-state peer learning community (Pew Charitable Trusts, 2020), is a clear policy direction for supporting scientifically vetted interventions, clinician trainings, and program standards in behavioral healthcare. This approach would facilitate the critical tasks of identifying and implementing evidence-based programs and clinical practices by policy makers and service providers.
Limitations
This study is limited to the official policies and practices that are reflected in U.S. state government statutes and regulations. In addition to these official policies, states may make more use of EBPRs through informal administrative practices or funding announcements than is indicated by this research. Court decisions may also influence the use of EBPRs. We did not capture these additional types of EBPR references or usage in the present study.
Additionally, the present study is limited to identifying instances of EBPRs being mentioned in U.S. state legal mandates and could not identify the actual impact of any policy on the use of EBPs in a given state.
Although statutes tend to be more permanent than regulations, both are subject to modification and repeal. Also, new statutes can be passed and new regulations adopted. Thus, the present study represents a snapshot of a moment in time. Future studies may yield different results as mandates are modified, repealed or new ones adopted.
The legal mandate search was limited to the U.S.; the study could not examine the extent to which EBPRs may be referenced in the legislation and regulations of other countries.
Acknowledgments:
Mary Ramlow provided valuable administrative support.
Funding:
The study was funded by grant # R01DA042036 from the National Institute on Drug Abuse.
Footnotes
Consent for publication: All authors have approved the manuscript and consent to its publication.
Competing interests:
The authors certify that they have no affiliations with or involvement in any organization or entity with financial or non-financial interest in the subject matter or materials discussed in this manuscript.
Availability of data and materials:
The coded data will be deposited in the following public archive: National Addiction and HIV Data Archive Program (https://www.icpsr.umich.edu/web/pages/NAHDAP/index.html).
REFERENCES
- Administration for Children and Families (n.d.). Home Visiting Evidence of Effectiveness. Retrieved July 7, 2021, from https://homvee.acf.hhs.gov/
- American Psychological Association Division 53 (n.d.). Effective Child Therapy. Retrieved July 7, 2021, from https://effectivechildtherapy.org/
- Bastian H, Glasziou P, & Chalmers I (2010). Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS Medicine, 7(9), e1000326.
- Beidas RS, Stewart RE, Adams DR, Fernandez T, Lustbader S, Powell BJ, … Barg FK (2016). A multi-level examination of stakeholder perspectives of implementation of evidence-based practices in a large urban publicly-funded mental health system. Administration and Policy in Mental Health and Mental Health Services Research, 43, 893–908.
- Broeder JL, & Donze A (2010, May 1). The role of qualitative research in evidence-based practice. Neonatal Network, 29(3), 197–202. 10.1891/0730-0832.29.3.197
- Bruns EJ, Hoagwood KE, Rivard JC, Wotring J, Marsenich L, Carter B, & Hamilton JD (2008). State implementation of evidence-based practice for youths, part II: Recommendations for research and policy. Journal of the American Academy of Child and Adolescent Psychiatry, 47(5), 499–504.
- Bruns EJ, & Hoagwood KE (2008). State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the American Academy of Child and Adolescent Psychiatry, 47(4), 369–373.
- Buckley PR, Fagan AA, Pampel FC, & Hill KG (2020). Making evidence-based interventions relevant for users: A comparison of requirements for dissemination readiness across program registries. Evaluation Review, 44(1), 51–83.
- Burkhardt JT, Schröter DC, Magura S, Means SN, & Coryn CLS (2015). An overview of evidence-based program registers (EBPRs) for behavioral health. Evaluation and Program Planning, 48, 92–99.
- The Campbell Collaboration (n.d.). The Campbell Collaboration. Retrieved July 8, 2021, from https://www.campbellcollaboration.org/
- Cochrane Database of Systematic Reviews. (2021). Retrieved from http://www.cochrane.org/
- Charon LM, Milstein C, Moyers S, Abildso CG, & Chriqui JF (2021). Do state comprehensive planning statutes address physical activity? Implications for rural communities. International Journal of Environmental Research and Public Health, 18(22), 12190.
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, & Lowery JC (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50.
- Fagan AA, Bumbarger BK, Barth RP, Bradshaw CP, Cooper BR, Supplee LH, & Walker DK (2019). Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: Challenges and opportunities. Prevention Science, 20(8), 1147–1168.
- Freund M, Zucca A, Sanson-Fisher R, Milat A, Mackenzie L, & Turon H (2019). Barriers to the evaluation of evidence-based public health policy. Journal of Public Health Policy, 40, 114–125. 10.1057/s41271-018-0145-9
- Green J, & Britten N (1998, April 18). Qualitative research and evidence based medicine. BMJ, 316(7139), 1230–1232. 10.1136/bmj.316.7139.1230
- Grypdonck MHF (2006). Qualitative health research in the era of evidence-based practice. Qualitative Health Research, 16(10), 1371–1385. 10.1177/1049732306294089
- Hallfors D, & Cho H (2007). Moving behavioral science from efficacy to effectiveness. International Journal of Behavioral Consultation and Therapy, 3(2), 236–250.
- Haynes B (1999, September 11). Can it work? Does it work? Is it worth it? The testing of healthcare interventions is evolving. British Medical Journal, 319(7211), 652–653. 10.1136/bmj.319.7211.652
- Horne CS (2017). Assessing and strengthening evidence-based program registries’ usefulness for social service program replication and adaptation. Evaluation Review, 41(5), 407–435.
- Institute of Education Sciences (n.d.). What Works Clearinghouse. Retrieved July 6, 2021, from https://ies.ed.gov/ncee/wwc/
- Jossie ML (2019). Evidence-based practice in offender programming: An examination of the CrimeSolutions.gov Registry. UNF Graduate Theses and Dissertations, 875. Retrieved from https://digitalcommons.unf.edu/etd/875
- Macklem GL (2020). Chapter 21: Registries of evidence-based programs. In Brief SEL Interventions at School (pp. 69–94). Springer Nature Switzerland AG.
- Maranda MJ, Magura S, Gugerty R, Lee MJ, Landsverk JA, Rolls-Reutz J, & Green B (2021). State behavioral health agency website references to evidence-based program registers. Evaluation and Program Planning, 85. 10.1016/j.evalprogplan.2021.101906
- Maranda MJ, Lee-Easton MJ, & Magura S (2022). Variations in definitions of evidence-based interventions for behavioral health in eight selected U.S. states. Evaluation Review. 10.1177/0193841X221100356
- Means SN, Magura S, Burkhardt JT, Schröter DC, & Coryn CLS (2015). Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Evaluation and Program Planning, 48, 100–116.
- Malekinejad M, Horvath H, Snyder H, & Brindis CD (2018). The discordance between evidence and health policy in the United States: The science of translational research and the critical role of diverse stakeholders. Health Research Policy and Systems, 16(1). 10.1186/s12961-018-0336-7
- Mihalic SF, & Elliott DS (2015). Evidence-based programs registry: Blueprints for Healthy Youth Development. Evaluation and Program Planning, 48, 124–131.
- Modini M, Tan L, Brinchmann B, Wang MJ, Killackey E, Glozier N, … Harvey SB (2016). Supported employment for people with severe mental illness: Systematic review and meta-analysis of the international evidence. British Journal of Psychiatry, 209(1), 14–22. 10.1192/bjp.bp.115.165092
- Neuhoff A, Axworthy S, Glazer S, & Berfond D (2015). The What Works Marketplace: Helping leaders use evidence to make smarter choices. Retrieved from https://www.bridgespan.org/bridgespan/Images/articles/the-what-works-marketplace/the-what-works-marketplace.pdf
- Office of Juvenile Justice and Delinquency Prevention (n.d.). Model Programs Guide. Retrieved July 6, 2021, from https://www.ojjdp.gov/mpg
- Open Minds (2020, May 6). The U.S. mental health market: $225.1 billion in spending in 2019: An OPEN MINDS Market Intelligence Report. Retrieved from https://openminds.com/intelligence-report/the-u-s-mental-health-market-225-1-billion-in-spending-in-2019-an-open-minds-market-intelligence-report/
- Paulsell D, Thomas J, Monahan S, & Seftor NS (2017). A trusted source of information: How systematic reviews can support user decisions about adopting evidence-based programs. Evaluation Review, 41(1), 50–77. 10.1177/0193841X16665963
- Petrosino A (2014). Integrating evidence on violence prevention: An introduction. In Carroll L, Perez MM, Taylor RM, Forum on Global Violence Prevention, Board on Global Health, Institute of Medicine, & National Research Council (Eds.), The Evidence for Violence Prevention Across the Lifespan and Around the World: Workshop Summary (pp. 87–94). Washington, DC: The National Academies Press.
- Pew-MacArthur Results First Initiative. (2014). Evidence-based policymaking: A guide for effective government. Retrieved from http://www.pewtrusts.org/es/research-and-analysis/reports/2014/11/evidence-based-policymaking-a-guide-for-effective-government
- Pew-MacArthur Results First Initiative. (2017). How states engage in evidence-based decision-making. Retrieved from https://www.pewtrusts.org/en/research-and-analysis/reports/2017/01/how-states-engage-in-evidence-based-policymaking
- Pew Charitable Trusts. (2020). Results First to help states sustain evidence-based policymaking. Retrieved from https://www.pewtrusts.org/en/research-and-analysis/articles/2020/08/19/results-first-to-help-states-sustain-evidence-based-policymaking
- Pew Charitable Trusts. (n.d.). Results First Clearinghouse Database. Retrieved July 8, 2021, from https://www.pewtrusts.org/en/research-and-analysis/data-visualizations/2015/results-first-clearinghouse-database
- Rady Children’s Hospital San Diego (n.d.). California Evidence-Based Clearinghouse for Child Welfare. Retrieved July 8, 2021, from https://www.cebc4cw.org/
- Results for America (2018). Invest in what works state standards of excellence report. Retrieved from https://results4america.org/tools/state-standard-of-excellence-2018-invest-in-what-works-state-standard-of-excellence/
- Reynolds K, & Ramakrishnan K (2018). Evidence-based policymaking at the state level: A guide for governors. Urban Institute. Retrieved from https://www.urban.org/research/publication/evidence-based-policymaking-state-level/view/full_report
- Rieckmann TR, Kovas AE, Cassidy EF, & McCarty D (2011). Employing policy and purchasing levers to increase the use of evidence-based practices in community-based substance abuse treatment settings: Reports from single state authorities. Evaluation and Program Planning, 34(4), 366–374.
- Rieckmann TR, Abraham A, Zwick J, Rasplica C, & McCarty D (2015). A longitudinal study of state strategies and policies to accelerate evidence-based practices in the context of systems transformation. Health Services Research, 50(4), 1125–1145.
- Stephenson AL (2017). Journey toward evidence-based status: Seeking admission to formal program registries. Health Promotion Practice, 18(5), 681–687. 10.1177/1524839916670575
- Substance Abuse and Mental Health Services Administration (SAMHSA). (n.d.). Evidence-Based Practices (EBP) Resource Center. Retrieved July 8, 2021, from https://www.samhsa.gov/resource-search/ebp
- Substance Abuse and Mental Health Services Administration (n.d.). National Registry of Evidence-based Programs and Practices. Retrieved July 8, 2021.
- Taylor K (2021). What’s the difference between statutes and regulations? Retrieved from https://libguides.colostate.edu/c.php?g=906786&p=6659470
- Test DW, Kemp-Inman A, Diegelmann K, Hitt SB, & Bethune L (2015). Are online sources for identifying evidence-based practices trustworthy? An evaluation. Exceptional Children, 82(1), 58–80. 10.1177/0014402915585477
- University of Adelaide. (n.d.). Joanna Briggs Institute. Retrieved July 8, 2021, from https://jbi.global/
- University of Colorado Boulder Institute of Behavioral Science (n.d.). Blueprints for Healthy Youth Development. Retrieved July 8, 2021, from https://www.blueprintsprograms.org/program-search/
- University of Oklahoma Health Sciences Center (n.d.). Suicide Prevention Resource Center. Retrieved July 8, 2021, from https://www.sprc.org/
- University of York (n.d.). The Centre for Reviews and Dissemination. Retrieved July 8, 2021, from https://www.york.ac.uk/crd/
- Walker SC, Lyon AR, Aos S, & Trupin EW (2017). The consistencies and vagaries of the Washington State Inventory of Evidence-Based Practice: The definition of “evidence-based” in a policy context. Administration and Policy in Mental Health and Mental Health Services Research, 44(1), 42–54. 10.1007/s10488-015-0652-y
- Westbrook TR, Avellar SA, & Seftor N (2017). Reviewing the reviews: Examining similarities and differences between federally funded evidence reviews. Evaluation Review, 41(3), 183–211.
- Westlaw Legal Research Database. (2021). Retrieved from https://legal.thomsonreuters.com/en
- White House. (2021). Memorandum on restoring trust in government through scientific integrity and evidence-based policymaking. Retrieved from https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/27/memorandum-on-restoring-trust-in-government-through-scientific-integrity-and-evidence-based-policymaking/
- Williams V, Boylan AM, & Nunan D (2019, October 1). Qualitative research as evidence: Expanding the paradigm for evidence-based healthcare. BMJ Evidence-Based Medicine, 24(5), 168–169. 10.1136/bmjebm-2018-111131
- Zack MK, Karre JK, Olson J, & Perkins DF (2019). Similarities and differences in program registers: A case study. Evaluation and Program Planning, 76, 101676. 10.1016/j.evalprogplan.2019.101676
- Yingling DL, & Mallinson DJ (2020). Explaining variation in evidence-based policy making in the American states. Evidence & Policy, 16(4), 576–596.
