Abstract
Evaluations of behavioral health interventions have identified many that are potentially effective. However, clinicians and other decision makers typically lack the time and ability to search and synthesize the relevant research literature effectively. In response to this need, and to increasing policy and funding pressures for the use of evidence-based practices, a number of “what works” websites have emerged to assist decision makers in selecting interventions with the highest probability of benefit. However, these registers as a whole are not well understood. This article, which represents phase one of a sequential mixed methods study, presents a review of the scopes, structures, dissemination strategies, uses, and challenges of evidence-based registers in the behavioral health disciplines. The major findings show that, in general, registers of evidence-based practices are able, to a degree, to identify the most effective practices and meet the needs of decision makers. However, much remains to be done to improve the ability of the registers to fully realize their purpose.
Keywords: Evidence-based practices, empirically supported treatments, best practices, evidence-based program registers
1. Introduction
Behavioral health providers are increasingly tasked with the use of practices that have demonstrated the strongest impacts in high-quality research analyses. With this increased focus on the use of “evidence-based programs” (EBPs), a number of “what works” organizations (including those that produce EBP websites, hereafter referred to as ‘registers’ or evidence-based program registers [EBPRs]) have emerged to assist policy makers and practitioners in selecting interventions¹ with the greatest potential benefit to individuals and society. This paper presents a study of EBPRs that are centered on behavioral health interventions, assessing the degree to which they are likely to assist decision makers with the implementation of effective behavioral health programming.
The mandate for the use of EBPs is clear: President Obama articulated his administration’s commitment to “eliminating what we don’t need, or what doesn’t work, and improving the things that do” (Rochelson, 2009, n.p.). This agenda was buttressed further by a 2013 policy memorandum from the United States Office of Management and Budget (OMB), which directed agencies to use credible evidence in the formulation of their budget proposals and performance plans (OMB, 2013). Additionally, the OMB encouraged agencies to find new evidence of effective ways to address current “policy challenges” (OMB, 2013, p. 2) and to fund those programs that are “backed by strong evidence of effectiveness while trimming activities that evidence shows are not effective” (p. 2).
Moreover, states are increasingly emphasizing the use of evidence-based practices in a variety of practice settings, including drug addiction, mental health, nursing, education, and criminal justice (Hawai’i State Center for Nursing, 2013; Minnesota Department of Corrections, 2011). At the extreme, a recent review of state mandates for programming within the substance abuse field (Rieckmann, Kovas, Cassidy, & McCarty, 2011) revealed that in 2011, five states had current legislative mandates for the use of EBPs, as compared with one state in each of the prior two years (Rieckmann, Kovas, Fussell, & Stettler, 2009; Rieckmann, Kovas, Cassidy, & McCarty, 2011). This proliferation of mandates indicates that service agencies expecting government funding will increasingly be required to use practices that are based on empirical evidence of effectiveness.
In order to improve the delivery of empirically backed interventions, the OMB has stated that “rigorous, independent program evaluations can be a key resource in determining whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost” (Orszag, 2009, p. 1). With the increasing focus on the use of evidence-based practices to improve service delivery, there has been a deluge of scholarly and popular writings on what exactly constitutes an evidence-based practice. For example, in late 2009, a simple Google search of the term “evidence-based practice” yielded more than one million entries, in contrast to 74,000 entries in Google Scholar. In early 2014, the same search yielded more than 2.5 million entries in Google and over 200,000 in Google Scholar. A search for the term “evidence-based programs” in 2009 yielded about 60,000 results in Google and about 3,000 results in Google Scholar, while in early 2014 that search yielded approximately 200,000 and 8,000, respectively. This highlights the importance of understanding how such a massive amount of information about evidence-based programs can be made useful for practitioners, policymakers, administrators, and the public at large (Boruch & Rui, 2008). Thus, effective mechanisms for access to credible and timely information about evidence-based practices are needed in order to support decision makers.
In response to this need, a number of EBPRs have been developed. These EBPRs in general serve an important function, but there is evidently room for improvement. For example, a study conducted by the Government Accountability Office (United States Government Accountability Office, 2010) found a variety of problems related to the implementation of the What Works Clearinghouse (WWC), including gaps in performance measures, lack of awareness and use of the WWC by target audiences, delayed timeliness of product release, lack of transparency as to why studies did not qualify for review, and presentation of limited information due to exclusion of certain studies that did not meet design standards.
Another limited review conducted by the Government Accountability Office in 2009 focused on federally supported initiatives aimed at identifying “effective interventions in order to provide insight into the choices of procedures and criteria those other independent organizations made in attempting to achieve a similar outcome as the Top Tier Initiative” (United States Government Accountability Office, 2009, p. 5). This review primarily described the Top Tier Initiative rather than comparing existing EBPRs. Finally, a 2011 review of interventions included in the National Registry of Evidence-Based Programs and Practices (NREPP) did not deeply consider the structure of the register itself (Hennessy & Green-Hennessy, 2011), but did provide insight important for supporting the implementation of EBPRs.
Despite these studies, relatively little is known about the broader collection of behavioral health registers in terms of how they are constructed and funded, in what ways they make decisions about what interventions to include, and the ways in which a potential user may make use of the information contained in the registers. The purpose of this paper is to describe existing EBPRs relevant to behavioral health according to their purposes, audiences, funding sources, marketing strategies, search functions, processes for identifying studies for inclusion, standards of evidence, dissemination of results, and challenges faced. We seek to understand if these registers are likely to produce the kind of valid, reliable, and timely information the typical decision maker will need.
2. Methods
The present study represents the first phase of a sequential mixed methods study. Phase II, which includes a systematic review of the criteria and standards used by EBPRs to certify programs or modalities of interventions as evidence-based, is presented in a companion paper in this issue, “Comparing Rating Paradigms for Evidence-Based Practice Registers in Behavioral Health” (Means, Magura, Burkhardt, Schröter, & Coryn, 2015).
2.1 Sample
The population of interest for the present study was registers of evidence-based programs in behavioral health. Behavioral health is defined as:
“An umbrella term for care that addresses behavioral problems bearing on health, including patient activation and health behaviors, mental health conditions and substance use, and other behaviors that bear on health” (Peek et al., 2013).
Evidence-based behavioral health interventions are behavioral health interventions whose effectiveness and appropriateness for implementation are supported by empirical data derived from systematic scientific inquiry (Chambless & Hollon, 1998; Southam-Gerow & Prinstein, 2014).
Evidence-based behavioral health registers were defined in this study as being (1) web-based collections of (2) behavioral health interventions that use (3) documentable criteria for including and excluding programs or interventions, and (4) feature evaluative information that could support decision making. Online registers were chosen as we believe that printed listings of EBPRs (such as reports or journal articles) do not provide the kind of timely information needed by decision makers, and that decision makers would likely access searchable web-based resources in preference to static printed materials, which can quickly become out of date.
In order to identify the sample for the study, we began with a set of resources (reports, resource lists, linking sites, practice manuals, and evidence-based program registers), conducted a web search for additional resources, and developed a pool of 129 candidate EBPRs. The inclusion/exclusion criteria listed above were then applied to reduce the list to the final candidates. EBPRs that focused exclusively on medical treatment, physical health, or general education programming (e.g., the Best Evidence Encyclopedia; Johns Hopkins University Center for Data Driven Reforms in Education, 2013) were removed from the study, as were registers that relied solely on another register for their content (the OJJDP Model Programs Guide, FindYouthInfo, SPRC). Although a large majority of the interventions reviewed by the Cochrane Collaboration are medically related, many do address behavioral health; given the Cochrane Collaboration’s highly regarded position in the evidence-based practice field, it was included in the study. As shown in Table 1, the final number of registers meeting the inclusion criteria was 20.
Table 1. Registers meeting the inclusion criteria (N = 20).
Register Name | Short Name/Abbreviation | Sponsor | Operator
---|---|---|---
Blueprints for Healthy Youth Development | Blueprints | Annie E. Casey Foundation | CSPV^a
California Evidence-based Clearinghouse for Child Welfare | CEBC | CDSS^b | RADY^c
CDC Diffusion of Evidence-Based Interventions | Effective/HIV | CDC | Danya International
CDC Prevention Research Synthesis Project | Compendium/HIV | CDC | CDC
Child Trends (LINKS database) | ChildTrends | Stewart^d | 
CrimeSolutions.gov | CrimeSolutions.gov | OJP^e | DSG
Effective Child Therapy | Effective Child Therapy | APA53^f | APA53
Evidence-Based Practices for Substance Use Disorders | Washington | WSDASA^g | University of Washington/NFATTC^h
Home Visiting Evidence of Effectiveness | HomVEE | DHHSACF^i | MPR^j
National Registry of Evidence-based Programs and Practices | NREPP | SAMHSA^k | SAMHSA
OAH Teen Pregnancy Prevention | TPP | DHHSOAH^l | MPR
PracticeWise | PracticeWise | PracticeWise LLC | PracticeWise LLC
Promising Practices Network on children, families, and communities | PPN | PPN | PPN
Resource Center for Adolescent Pregnancy Prevention | ReCAPP | –^m | ETR Associates
Social Programs That Work/Top Tier Evidence | Social Programs That Work | MacArthur | Coalition for Evidence-Based Policy
The Campbell Collaboration | Campbell | –^n | Campbell Collaboration
The Centre for Reviews and Dissemination – York | CRD York | NIHR^o | University of York
The Cochrane Collaboration | Cochrane | –^n | Cochrane Collaboration
The Guide to Community Preventive Services – The Community Guide: What Works to Promote Health | The Community Guide | CDC | CDC
The What Works Clearinghouse | WWC | DOE^p | MPR/DSG
^a CSPV – Center for the Study and Prevention of Violence
^b CDSS – California Department of Social Services
^c RADY – Rady Children’s Hospital, San Diego
^d Stewart – The Alexander and Margaret Stewart Trust
^e OJP – U.S. Office of Justice Programs
^f APA53 – American Psychological Association, Division 53
^g WSDASA – Washington State Department of Alcohol and Substance Abuse
^h NFATTC – Northwest Frontier Addiction Technology Transfer Center
^i DHHSACF – U.S. Department of Health and Human Services, Administration for Children and Families
^j MPR – Mathematica Policy Research
^k SAMHSA – U.S. Substance Abuse and Mental Health Services Administration
^l DHHSOAH – U.S. Department of Health and Human Services, Office of Adolescent Health
^m Sponsor unknown – register unavailable for interview
^n Multiple funding agencies
^o NIHR – National Institute for Health Research, UK
^p DOE – U.S. Department of Education
2.2. Document Review
A classification scheme was developed based on an initial review of a subsample of web-based documents, the Cochrane Collaboration standards for meta-analysis, and advisory board input. An early draft of this classification scheme was applied to a sample of registers to calibrate the coders and to detect potential issues with interpretation of the categories. Additional refinements to the scheme were made during the classification process and at the time of the interviews with register managers. When revisions were made, all register websites were revisited and their classifications were updated to ensure the accuracy of the document review. The registers were classified by two independent reviewers, who subsequently discussed discrepancies until 100% agreement was achieved. Reliability was not calculated for the classification, given that the classification scheme was intended for description only and that the research team ultimately negotiated consensus on the classifications.
2.3. Interviews
An interview protocol was developed with feedback from the expert panel, and included questions about the registers’ background, funding and support, website functioning, users, marketing, dissemination, competitors, sustainability, and other pertinent information. Register managers (individuals in the operating organization of each register with knowledge about the history, context, and functioning of the register) were provided with the interview protocol and a register profile (coded information from the website), and subsequently interviewed in order to verify and supplement the collected data. These documents were revised based on the interviews, and the register managers were provided with the revised documents for final verification after the interview. Interview data were synthesized across major themes to determine commonalities and differences across registers, and were added to the coding information during the analysis portion of the study. Recruitment was accomplished by sending an initial scheduling e-mail, with a follow-up e-mail sent if there was no response. If no contact was made through e-mail, an attempt was made to schedule by phone. Interviews were conducted with 17 (85%) of the included registers; three registers (15%) could not be reached for scheduling.
3. Results
The registers included in the review (N = 20) had been in existence for an average of about 9 years (M = 9.4, SD = 5.5). For those registers that primarily included individual programs, between 30 and 660 programs or interventions (M = 156, SD = 169) were included in their databases. Seventeen of the registers (85%) featured an explicit and locatable statement of purpose. Typically these purposes included “to support practitioners in the delivery of effective services,” “to inform the implementation of effective programs at the program design level,” and/or “to help consumers and their family members to become more aware of available evidence-based treatments.” In terms of content, twelve registers (60%) included individual programs, while four registers (20%) included meta-analyses or systematic reviews. Three registers (15%) included a combination of individual programs and modalities, and one register (5%) included modalities (classes of treatments) only.
Funders
Nine registers (45%) were funded by U.S. federal or state government agencies (e.g., CDC, OJP, the State of California), and two by international governments (e.g., the United Kingdom, Norway), for a total of 11 governmentally funded registers (55%). Three of the registers (15%) were primarily funded by philanthropic foundations, namely the Annie E. Casey Foundation, the MacArthur Foundation, and the Alexander and Margaret Stewart Trust. Three registers (15%) were funded through alternate means such as project-based contracts (PracticeWise), membership fees (Effective Child Therapy), and funding from network members (Promising Practices Network). One register, the Cochrane Collaboration, had primary funding from multiple types of sources. Finally, two registers had unknown funding sources, since they could not be reached for interview and the funder was not apparent from their websites.
Eight registers (40%) were originally developed from major federal initiatives aimed at addressing broad social problems. CrimeSolutions.gov, for instance, was funded by the U.S. Department of Justice, Office of Justice Programs to address crime and recidivism, and the What Works Clearinghouse was funded by the U.S. Department of Education to address problems in educational technology and school administration. Other notable federally funded registers include the National Registry of Evidence-Based Programs and Practices (SAMHSA) and Social Programs that Work. Two registers were funded at the state level (the California Evidence-Based Clearinghouse for Child Welfare [CEBC] and Washington State’s Evidence-Based Practices for Substance Use Disorders), although the CEBC has achieved use outside of its original scope. These tended to be more narrowly focused than the federal registers – for example, the CEBC is specifically focused on issues related to child welfare services, while the Washington register focuses solely on substance abuse programs.
Registers funded by charitable foundations tended to value their independence from funding-source pressure (for example, perceived political pressure from agencies within the government). They tended to interpret this independence as allowing them to be more objective than other registers. However, among the registers funded by government, there was no corresponding perception that government ties reduced their objectivity.
Ninety percent of the registers provided their information for free. The remaining two registers provided partial information for free and required membership for users wanting access to more detailed information. PracticeWise, for example, was initially funded through a grant but now supports itself through membership-based access to its database as well as other clinical tools. These memberships are held by state agencies, clinical practices, and treatment professionals.
Audiences and users
Register users included national and international practitioners; researchers and grant writers; government employees at the local, state, and national levels; clients of agencies; community leaders; and advocacy groups. Overall, 60% of the registers intended to reach a national audience. However, because almost all registers provide information for free, many indicated reaching audiences beyond their intended targets. Eight registers (40%) reported international reach, although only three intended to reach an international audience (i.e., the Cochrane Collaboration, the Campbell Collaboration, and the Centre for Reviews and Dissemination at York). Moreover, all three of these had some form of connection with one another. The Cochrane and Campbell collaborations share some staff, and the Centre for Reviews and Dissemination features Cochrane Collaboration reviews in its database. The Cochrane and Campbell collaborations also have co-registered interventions with identical supporting information.
Uses
While user groups are broad and often extend beyond the intended audience, interviews indicated that the primary uses of the registers were linked to government mandates and related funding requirements for using practices with an evidentiary base. Examples can be found in recent funding announcements from the Department of Education, which include clear requirements to follow WWC standards. Similarly, the Office of Adolescent Health in the Department of Health and Human Services oversees the Teen Pregnancy Prevention Initiative, which requires that 75% of all funds spent by a program funded under the initiative be spent on evidence-based programming. In these cases, the EBPRs have focused on the specific requirements of their granting organizations in developing their criteria and standards.
Marketing strategies
To promote use, many registers rely on word-of-mouth advertising, grant solicitations, subscription lists, presentations and booths at conferences, journal articles, and links on websites. Interviews suggested that not all registers have an explicit marketing plan to broaden dissemination and use. Nevertheless, one in five register managers mentioned using direct marketing strategies such as issuing press releases and brochures. Many register managers indicated that efforts to establish more formal marketing strategies would be made in the future.
Search functions
A particular usability issue noted during the study concerned the search functions used by the registers. Fifteen registers (75%) featured search functions, four registers (20%) used a hyperlink-based system, and one register displayed all programs and practices on its main page. Of the fifteen registers that provided a search function, six offered a basic search only; five provided a basic search with options for advanced follow-up should the user need more searching power; three provided a basic search with post-search filters available to refine results; and one provided only an advanced search.
An example of basic search functionality can be found in Effective Interventions, a large HIV prevention register. This register features a simple keyword search with no option for advanced constraints on the search criteria. Searching this register yields results that are broad in scope and numerous; for example, a keyword search for the term “condom” returned 36 pages of results, ranging from specific programs to broad reviews of the general effectiveness of condom distribution.
At the opposite end of the spectrum is the Cochrane Collaboration, a multinational repository of systematic reviews and meta-analyses. On its main page, this repository uses a keyword search as its initial portal and then provides sidebars with criteria that can help the user filter results. This register also has a subsidiary library page with more in-depth reviews, which features a more complex search function. The main search for the library page includes a drop-down filter box that allows the user to search by title/abstract/keywords, record title, authors, abstract alone, keywords alone, tables, publication type, source, and DOI number. Additionally, search terms may be entered using Boolean operators.
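For illustration (a hypothetical query, not one drawn from the register itself), a user of such an advanced search might combine terms with Boolean operators, for example “condom AND (adolescent OR school) NOT abstinence,” and restrict the search to titles and abstracts, thereby narrowing the broad result sets that a simple keyword search would return.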
A third strategy for obtaining search results from the registers is exemplified by the Promising Practices Network, an older register that is funded by multiple public and private nonprofits, and Social Programs that Work, a large, federally funded register. These registers use a more traditional link-style navigation to direct the user to specific programs representing broad demographic categories or broad types of social problems – the user may need to have some sense of what they are looking for when using this type of search. Registers using this type of navigation have either a classic menu-style page, an interactive branching wizard, or a series of screens that, when followed, lead the user to a particular subset of records from the database. An example of the branching/limiting system can be found in the Effective Child Therapy register, which takes the user through a series of pages to help narrow the search. Effective Child Therapy also has two separate areas on its website, one for practitioners and one for the public, intended to help individuals find tailored information. However, the portion of the site devoted to the public provides considerably less information than the one directed at practitioners, and seems to lack any detailed summary of the results of its reviews.
Identification of studies
More than half of the registers (55%) actively search the literature for evidence of effective programs; three registers (15%) only accept nominations of programs for review; and six (30%) use a combination of nominations and active searches to identify effective programs. Literature searches typically focus on specific priority areas of the registers or the funding organization. Moreover, several register managers indicated that peer-reviewed publications are important when considering a program for inclusion. Once identified, studies were assessed by 2 to 13 reviewers, depending on the register. Lists of reviewers were available on half of the register websites.
Standards of evidence
The registers varied in the criteria and standards used to identify EBPs. These criteria and standards are addressed in Means et al. (this issue) and are not presented at length in this article. In general, each of the interviewed register managers felt that their standards of evidence were well researched and provided sufficient information to back their conclusions about evidence-based programming. The standards were typically based on well-accepted hierarchies of evidence and were absolute in nature (they did not compare interventions with one another). The Campbell Collaboration representative did note that, while the organization generally endorsed its overall paradigm for evidence, there was often variation in how individual review groups applied it.
Dissemination
Registers also varied in how often they updated their websites. Five registers (25%) reported updating their websites as reviews become available. Two registers (10%) updated annually, three (15%) updated biannually, one (5%) updated monthly, three (15%) updated weekly, and six (30%) updated on less stable or unknown schedules. An important aspect of the update rate was differentiating the rate at which new reviews were added from the rate at which existing reviews were updated. While new reviews could often be completed within a year or less, the timeline for re-review was often measured in years since the original publication, if the evidence base was re-reviewed at all. The timelines for reviews by modality (meta-analysis) registers were slower still, taking two to five years. At the time of this study, nine of the registers had existed for 10 years or more, indicating that they likely had time to conduct formal re-reviews of included interventions. For those registers, managers described a number of challenges in completing re-reviews, which are discussed below.
Challenges
During the interviews, register managers were asked about the primary challenges they faced in maintaining evidence-based registers. Resource constraints emerged as the primary challenge noted by all register managers. The program review processes were often labor intensive, and complex reviews required large numbers of experts to be completed in a timely manner. Typically these reviewers are staff members of the registers themselves, meaning that their salaries compete directly with other project costs within a fixed budget. For registers like the Cochrane Collaboration, which features more diffuse funding, reviewers may be university professors who conduct reviews on university time or who have received a grant external to Cochrane to conduct reviews. For those registers that directly fund reviewers, this zero-sum constraint leaves the register managers with two options: (1) minimize the rigor with which their reviews are conducted, thus reducing the number of hours they are required to pay for, or (2) employ fewer reviewers, thus reducing the capacity of the register to conduct reviews and extending the time needed to complete them adequately.
A second challenge related to scarcity of resources was the timeliness of the program reviews. Register managers uniformly stated that the process of identifying and reviewing candidate programs was extremely time-consuming. This was especially true for registers like the Cochrane Collaboration, which conduct systematic reviews that address complex social, health, and medical issues. Cochrane Collaboration staff mentioned that some of their reviews can take more than two years to complete, and they constantly have to decide which reviews are the most important to conduct. Other registers also described lengthy review processes, sometimes involving multiple phases, peer review panels, and subject matter experts. In the case of the larger registers, this creates a significant lag between the appearance of a program and its listing in a register. Some register managers mentioned that the lag in the review process made it difficult to support practitioners adequately. A parallel challenge noted by the register managers was that reviews were often difficult to keep up to date, especially for programs whose developers were difficult to contact.
A third challenge was optimizing the methods of communication with register users to maximize the prospects that review information will be used. In essence, there is a tension between the amount and depth of information collected during the review and what can be absorbed by users. Many of the register managers struggled with how to report the results of the review process in enough detail that one could fully appreciate the evidence base for a program, while providing that information in a manner accessible to the lay user.
4. Discussion
Funding sources and register autonomy
A number of the registers were created to meet the needs of particular agencies, and in those cases, register policies were governed by those agencies. For example, the Institute of Education Sciences (IES) What Works Clearinghouse is a tool used by the Department of Education to disseminate best practices in education, and is not free to develop its own review agenda outside the priorities of the department. One might expect, then, that political influences could play a role in the selection of programs or modalities that are ultimately judged to be evidence-based. However, the WWC representatives claimed that they were “politically independent” from their home organization. Additionally, their stringent review process is intended to preserve objectivity during the review process. At the same time, however, there is a potential for bias to enter the review process. The extent to which the IES funds or supports new research at program sites is unknown within the scope of this review. Thus, there is the potential for a strong feedback loop: sites funded by the IES are required to use evidence-based practices to receive funding, but in order to be on the list of evidence-based practices, a program would have had to be tested at a site funded by the IES, which in turn requires the use of evidence-based practices to receive funding. Eventually, a tipping point could be reached where the programs available for review by the WWC are limited to programs that are politically attractive to the IES. This potential for indirect political influence may be evident in other register-funder relationships as well.
Another example of indirect funding bias is the degree to which funding agencies support experimental evaluations of new programs in general. Funding agencies have finite resources and must make decisions about how to allocate them effectively. It is reasonable to assume that not all new programs are deemed worthy of being researched using a funder’s resources. Thus, it is possible that studies of programs judged to be politically viable are more likely to be funded than studies of those that are not. As in the previously mentioned situation, it is possible that political influence enters the review process. The extent to which this influence actually impacts the review process may be hard to determine, given that many registers do not present lists of programs that were found but never reviewed.
Additionally, most of the register managers mentioned funding as a challenge to completing reviews. This was especially true for registers that conducted literature reviews themselves (instead of taking nominations). For these registers, a large portion of the resources allocated to the register goes to the review process. This leaves less funding for website maintenance and other infrastructure-related tasks, which could ultimately impact the utility of the registers. Another potential impact of resource limitation is a reduction in the scope of the review process in general. It would be reasonable to question whether a register with less funding would be more likely to overlook obscure and hard-to-find studies. Indeed, many registers certified programs on the basis of only one or two studies, even in the case of some well-known interventions. An analysis of publication bias would be beneficial to understanding the likelihood that funding affects the registers’ thoroughness in locating studies.
Users and uses
While broad groups of audiences and uses of EBPRs were mentioned on websites and by register managers, there appears to be little systematic information about actual user groups and uses. This study identified a number of target users that the registers were intended to serve. For registers that operate within a strong mandate-to-practice context, such as the CEBC, HomVEE, and the OAH TPP, it is safe to assume that critical users are receiving the information they need. However, for users not in that context, the assumption that their needs are met may not hold.
In essence, the way in which conclusions about programs are presented by the registers could potentially impede their use. The registers as a whole tended to present information about programs relative to absolute standards of evidence. Thus, a user would have knowledge about the quality and strength of the evidence about a program “as it was studied.” The ultimate value of this type of information is limited by the fact that, for many of the registers, programs are vetted on the basis of only one or two studies. For those users who simply care whether a program has been shown to be effective at all, this type of information might be sufficient. However, in order to meet more complex user needs, the registers must synthesize the evidence into a conclusion about each program’s potential value to a user.
The caveat to this is that program implementers may need more than simple information. Programs are implemented in complex environments that require a number of trade-offs. Thus, the registers would do well to provide information not only about the efficacy of a program, but also about other factors related to program value. The dissemination literature provides some indicators, such as relative program advantage over business as usual, competitive advantage over other potential candidates, ease of implementation, cost-feasibility and cost-benefit, alignment with organizational values, and capacity of the organization to adopt innovations (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Rogers, 1995; Rogers, 2002). Such analysis may well be beyond the reasonable capacity of the registers given time and resource constraints. However, the greater the attention paid to such factors in a register’s analysis of a program’s value (beyond program impact), the greater the likelihood that the register could foster dissemination of a program. A second caveat is the assumption that the purpose of the register is in fact to foster dissemination rather than simply to provide information.
Finally, we note that people potentially use EBP information in two primary ways: (1) selecting a new program to implement and (2) validating an existing program. For individuals who use a register to validate existing programs, it may be enough simply to have information about whether the program is evidence-based (i.e., a single-tier or inclusion/exclusion rating). Periodically produced hard-copy lists might even be sufficient. However, individuals who must select a program for implementation may need extended information, such as the relative advantage of a program or modality over others. In this case, registers that feature tiers of evidence are more appropriate. No register offered any sense of comparative standards, which in effect treats all evidence-based interventions as equal. However, this implicit assumption may not be true.
In any case, more information is needed about how users employ the information presented in EBPRs, what types of information would best serve their needs, and ways to improve the delivery of content in meaningful ways.
Accessibility and transparency
The general lack of formal study of the users and uses of registers creates two potential problems. First, in relation to accessibility, the development of the search functions used by the registers needs to address user needs. Many users may be naïve to the intricacies of evidence-based programming and may not have a specific program or search term in mind when they attempt to search a register. Thus, the search functions within the registers must be able to reconcile the way a user would search the database with the way the information is organized in the reviews. For example, a user may need to search by class of intervention, as well as by target population and the resources an intervention is expected to require. Additionally, searching by targeted outcomes may be useful in today’s results-oriented programming environments. By providing multiple ways to search for interventions, a register may be able to return the most useful possible set of results to the user; a minimal sketch of this kind of multi-faceted filtering appears below.
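The following is a minimal, hypothetical sketch (in Python, with illustrative field names that do not correspond to any actual register’s data model) of how register records could be filtered simultaneously by intervention class, target population, and targeted outcome:

```python
# A minimal, hypothetical sketch of multi-faceted register searching.
# Field names ("intervention_class", "population", "outcomes") are
# illustrative only and do not reflect any actual register's data model.

records = [
    {"name": "Program A", "intervention_class": "cognitive-behavioral",
     "population": "adolescents", "outcomes": ["substance use", "depression"]},
    {"name": "Program B", "intervention_class": "family therapy",
     "population": "children", "outcomes": ["conduct problems"]},
]

def search(records, intervention_class=None, population=None, outcome=None):
    """Return the records matching every facet the user specified."""
    matches = []
    for record in records:
        if intervention_class and record["intervention_class"] != intervention_class:
            continue
        if population and record["population"] != population:
            continue
        if outcome and outcome not in record["outcomes"]:
            continue
        matches.append(record)
    return matches

# A user with no specific program in mind searches by population and outcome.
results = search(records, population="adolescents", outcome="substance use")
print([r["name"] for r in results])  # ['Program A']
```

In a sketch like this, a user who knows only the population served and the outcome desired can still retrieve a relevant subset of records, which is the kind of flexibility the discussion above suggests registers could offer.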
The second issue involves transparency. The transparency of a register may be thought of as the degree to which an external reviewer could reliably come to the same conclusion about a program as the register did, based on what is known about the register’s protocols (observer independence). Factors such as differing criteria across registers, organizational contexts, the degree to which reviewer judgment is allowed, and the comprehensiveness of the search for supporting studies can reduce this observer independence. A discussion of the impact of differing criteria across registers appears in Means and colleagues’ paper in this issue. Additional efforts to increase register accessibility and transparency would be helpful.
Implications of mandates
Several of the registers included in the study supported mandates for the use of evidence-based practices. These mandates are intended to increase the use of EBPs by those who receive funding. As recently as 2011, studies have demonstrated that mandating use can have a positive influence on the employment of EBPs (see Rieckmann et al., 2011, and Weiss et al., 2008, for discussion). However, there has also been pushback from implementers about the dangers of “cookie cutter” approaches that may develop when top-down mandates are used (Dopson, Locock, Gabbay, Ferlie, & Fitzgerald, 2003; Jette et al., 2003; McGowan, 2010). An effective strategy is still needed to produce scientific and generalizable evidence that remains flexible to user needs at the individual level.
Duplication of efforts
There are a number of programs that appear in more than one register, but are not always rated similarly across registers (see Means et al., in this issue). This effectively creates redundancy in the cases where registers agree and uncertainty in the cases where the registers disagree. Preventing redundancy could allow for increased coverage. While the managers of the registers did indicate that they interact and share knowledge, increased coordination across registers may be beneficial.
5. Conclusions
The fundamental premise of this study was that by aggregating the studies supporting an intervention, a register can assist decision makers, researchers, clinicians, and other users who need information about evidence-based programs. This requires that the registers provide valid, reliable, and timely information about the evidence base of interventions, and about the evidence base within disciplines in general. In terms of valid conclusions about intervention efficacy, the registers are generally able to provide them, given the extent to which they honor research designs that promote causal inference. The reliability of the information generated by the registers is less certain – the register managers acknowledged that the scope and purpose of the registers affected which interventions were reviewed and ultimately accepted (although this acknowledgment was often not explicit). We conclude that these variations in scope, purpose, and contextual factors related to audience targeting could lead to users receiving inconsistent or even conflicting information about program effectiveness when consulting different registers. These questions are addressed more deeply in Means et al. (this issue). In terms of timeliness, the registers frequently noted that generating reviews in a manner that meets the time demands of decision makers was a constant challenge. In order for a register to be valuable to users, it must produce information more efficiently than the user could have done otherwise; funding and resource limitations may prevent the registers from accomplishing this.
In terms of generating information about the evidence base of particular interventions, the registers are able to accomplish a level of synthesis that the average practitioner or decision maker may not be able to reach on their own. However, in terms of developing an evidence base across a discipline, the user may not gain any information about the relative advantages of one intervention over another (a critical factor in innovation diffusion) by simply looking at a register, since none of the registers’ ratings of interventions used relative standards – they all used absolute standards. Unless interventions are compared with each other by the registers, the user will still have a lot of work to do in order to decide whether to implement a particular intervention. This is especially true in the case mentioned earlier, where the user gets conflicting information about an intervention by visiting different registers.
Overall, registers of evidence-based information are valuable resources in the world of evidence-based programming writ large. However, further research into the users and uses of the registers, and into the role of evaluation reports in the evidence base of interventions, could improve the value of EBPRs to practitioners, researchers, and decision makers in the future.
6. Lessons learned
Two primary lessons emerged from this study. First, the process of reducing the information from the registers into a set of pre-established and highly recognized standards (i.e., the Cochrane Collaboration standards) must be iterative and open to the diversity of approaches in the field. Our initial assumption was that the registers could be categorized in a fairly clean and consistent way. However, we learned that although the registers were similar in many ways, there were subtle differences in approaches, definitions, and assumptions that made categorization difficult. Considerable collaboration and revision went into the description of the registers, and replication of the decisions made in this study would require similar discussion, interpretation, and revision.
Second, during the course of the investigation, the degree to which the registers were interconnected became apparent – many of the register managers were aware of each other’s work, and many of them have had collegial contact. Had this been known in advance, additional interview questions could have been prepared. Future iterations of this type of research should recognize that many government agencies do interact, and inquiry into the “behind the scenes” networks that exist would be beneficial, as these agencies have a direct impact on how practices are funded.
Highlights.
The registers tended to be funded by a range of sources, including governmental agencies, non-profit organizations, and international organizations.
The registers tended to target their services toward program-level decision makers and researchers. However, within the registers’ organizational knowledge, very little is understood about those users’ needs and the degree to which the registers meet those needs.
The registers sometimes lacked transparency – either their criteria for inclusion were hard to find, or the vetting process was not well delineated on the website.
Many of the registers cited limited resources as a challenge to providing timely reviews.
Acknowledgments
We would like to thank our advisory committee and expert consultants for their contributions to this project and product, particularly Katrina Bledsoe, Robert Boruch, Christina Christie, Lois-Ellin Datta, Stewart Donaldson, Jennifer Greene, George Julnes, Rebecca Kilburn, Frances Lawrenz, Michael Quinn Patton, Mitchell J. Prinstein, Roger Paul Weissberg, Brian T. Yates, and an unnamed officer from the National Registry of Evidence-Based Programs and Practices. This article was made possible through support from NIH NIDA (grant number 1R21DA032151-01).
Appendix - Search terms used to identify relevant EBPRs
At critical points throughout the study, searches of the internet were conducted to identify relevant EBPRs for inclusion. The searches were replicated in both Google and Bing. Searches were not conducted using search engines that typically return research reports or articles (e.g., Google Scholar, PubMed, PsycINFO), as the results of these searches would fall outside the inclusion criteria for this study. The following terms were used to conduct searches: “evidence-based practice behavioral health repository,” “evidence-based practice behavioral health register,” “register AND evidence-based practice,” “repository AND evidence-based practice,” “register AND evidence-based practice –nursing,” “evidence-based repository,” “evidence based registry,” “evidence-based behavioral health register,” “best practice registry,” “promising practice register,” “evidence-based repository register,” “evidence-based practice,” “evidence-based practice behavioral health,” and “evidence-based behavioral health.”
Footnotes
¹ In this paper, the term “interventions” refers to “programs or modalities of interventions.”
Contributor Information
Jason T. Burkhardt, Project Manager, The Evaluation Center at Western Michigan University
Daniela C. Schröter, Director of Research, The Evaluation Center at Western Michigan University
Stephen Magura, Director, The Evaluation Center at Western Michigan University.
Stephanie N. Means, Project Manager, The Evaluation Center at Western Michigan University
Chris L.S. Coryn, Director, Interdisciplinary Ph.D. in Evaluation, Western Michigan University
References
* Register included in the analysis
** Register was reviewed and subsequently excluded from the study
- Boruch R, Rui N. From randomized controlled trials to evidence grading schemes: current state of evidence-based practice in social sciences. Journal of Evidence-Based Medicine. 2008;1(1):41–49. doi: 10.1111/j.1756-5391.2008.00004.x.
- *.Center for the Study and Prevention of Violence - Institute of Behavioral Science, University of Colorado Boulder. Blueprints for Healthy Youth Development. 2013 Retrieved 2012–2013, from Blueprints for Healthy Youth Development: http://www.colorado.edu/cspv/blueprints/index.html.
- *.Chadwick Center for Children and Families - Rady Children’s Hospital, San Diego. The California Evidence-Based Clearinghouse for Child Welfare. 2006–2013 Retrieved 2012–2013, from The California Evidence-Based Clearinghouse for Child Welfare: http://www.CEBC.org.
- Chambless DL, Hollon SD. Defining Empirically Supported Therapies. Journal of Consulting and Clinical Psychology. 1998;66(1):7–18. doi: 10.1037//0022-006x.66.1.7.
- *.Child Trends. ChildTrends/Links Databank. 2013 Retrieved 2012–2013, from ChildTrends/Links Databank: http://www.childtrends.org/
- *.Coalition for Evidence-Based Policy. Social Programs that Work/Top Tier Evidence. 2012 Retrieved 2012–2013, from Social Programs that Work/Top Tier Evidence: http://www.evidencebasedprograms.org.
- *.Danya International/Center for Disease Control. Effective Interventions: HIV Prevention that Works. 2012 Retrieved 2012–2013, from Effective Interventions: HIV Prevention that Works: https://www.effectiveinterventions.org/en/Home.aspx.
- Dopson S, Locock L, Gabbay J, Ferlie E, Fitzgerald L. Evidence-Based Medicine and the Implementation Gap. Health: An Interdisciplinary Journal for the Social Study of Health, Illness, and Medicine. 2003:311–330.
- *.ETR Associates. ReCAPP - Resource Center for Adolescent Pregnancy Prevention. 2007–2009 Retrieved 2012–2013, from ReCAPP - Resource Center for Adolescent Pregnancy Prevention: http://www.recapp.etr.org/recapp.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations. The Milbank Quarterly. 2004:581–629. doi: 10.1111/j.0887-378X.2004.00325.x.
- Hawai’i State Center for Nursing. Hawai’i State Center for Nursing. 2013 Retrieved November 2, 2013, from Hawai’i State Center for Nursing: http://www.hinursing.org/best-practice-quality.htm.
- Hennessy KD, Green-Hennessy S. A Review of Mental Health Interventions in SAMHSA’s National Registry of Evidence-Based Programs and Practices. Psychiatric Services. 2011:303–305. doi: 10.1176/ps.62.3.pss6203_0303.
- *.Institute of Education Sciences. What Works Clearinghouse. 2013 Retrieved 2012–2013, from What Works Clearinghouse: http://www.ies.ed.gov/ncee/wwc.
- Jette DU, Bacon K, Batty C, Carlson M, Ferland A, Hemmingway RD, Volk D. Evidence-Based Practice: Beliefs, Attitudes, Knowledge, and Behaviors of Physical Therapists. Physical Therapy. 2003;83:786–805.
- **.Johns Hopkins University Center for Data Driven Reforms in Education. Best Evidence Encyclopedia. 2013 Retrieved 2012–2013, from Best Evidence Encyclopedia: http://www.bestevidence.org/
- McGowan N. Evidence-based Practice: A Debate. Journal of Emergency Nursing. 2010 Nov;36(6):577–578. doi: 10.1016/j.jen.2010.07.008.
- Means S, Magura S, Burkhardt JT, Schröter D, Coryn C. Comparing Rating Paradigms for Evidence-Based Practice Registers in Behavioral Health. Evaluation and Program Planning. 2015;48:100–116. doi: 10.1016/j.evalprogplan.2014.09.007.
- Minnesota Department of Corrections. Study of Evidence-Based Practices in Minnesota: 2011 Report to the Legislature. St. Paul, MN; 2011. Retrieved August 25, 2013, from http://www.doc.state.mn.us/publications/legislativereports/documents/12-10EBPreport.pdf.
- **.Office of Juvenile Justice and Delinquency Prevention. OJJDP Model Programs Guide. 2014 Retrieved 2012–2014, from http://ojjdp.gov/mpg.
- Orszag PR. Memorandum for the Heads of Executive Departments and Agencies (M-10-01). Washington, DC; 2009, Oct 7.
- Peek CJ. Lexicon for Behavioral Health and Primary Care Integration: Concepts and Definitions Developed by Expert Consensus. Rockville, MD: Agency for Healthcare Research and Quality; 2013. Retrieved May 2014, from http://integrationacademy.ahrq.gov/sites/default/files/Lexicon.pdf.
- *.PracticeWise LLC. PracticeWise. 2013 Retrieved 2012–2013, from PracticeWise: http://www.practicewise.com. See also http://www.aap.org/en-us/advocacy-and-policy/aap-health-initiatives/Mental-Health/Documents/CRPsychosocialInterventions.pdf.
- *.RAND Corporation. Promising Practices Network on children, families, and communities. 2013 Retrieved 2012–2013, from Promising Practices Network on children, families, and communities: http://www.promisingpractices.net/
- Rieckmann TR, Kovas AE, Cassidy EF, McCarty D. Employing Policy and Purchasing Levers to Increase the Use of Evidence-Based Practices in Community Based Substance Abuse Treatment Settings. Evaluation and Program Planning. 2011;34(4):366–374. doi: 10.1016/j.evalprogplan.2011.02.003.
- Rieckmann TR, Kovas AE, Fussell HE, Stettler NM. Implementation of Evidence Based Practices for Treatment of Alcohol and Drug Disorders: The Role of State Authority. The Journal of Behavioral Health Services and Research. 2009:407–419. doi: 10.1007/s11414-008-9122-6.
- Rochelson D. New and more efficient ways of getting the job done. 2009 Jan 7; Retrieved from Change.gov: http://change.gov/newsroom/entry/new_and_more_efficient_ways_of_getting_the_job_done/
- Rogers E. Diffusion of Innovations. New York: Simon & Schuster, Inc; 1995.
- Rogers EM. Diffusion of Preventive Innovations. Addictive Behaviors. 2002;27(6):989–993. doi: 10.1016/s0306-4603(02)00300-3.
- Sackett DL, Rosenberg WM, Gray J, Haynes R, Richardson W. Evidence-based medicine: What it is and what it isn’t. British Medical Journal. 1996 Jan;312(7023):71–72. doi: 10.1136/bmj.312.7023.71.
- Southam-Gerow MA, Prinstein MJ. Evidence Base Updates: The Evolution of the Evaluation of Psychological Treatments for Children and Adolescents. Journal of Clinical Child and Adolescent Psychology. 2014:1–6. doi: 10.1080/15374416.2013.855128.
- *.Substance Abuse and Mental Health Services Administration. National Registry of Evidence-Based Programs and Practices. 2013 Retrieved 2012–2013, from National Registry of Evidence-Based Programs and Practices: http://www.nrepp.samhsa.gov.
- *.The American Psychological Association. Effective Child Therapy. 2012 Retrieved 2012–2013, from Effective Child Therapy, Evidence-based mental health treatment for children and adolescents: http://www.effectivechildtherapy.com/
- *.The Campbell Collaboration. The Campbell Library. 2013 Retrieved 2012–2013, from The Campbell Collaboration: http://www.campbellCollaboration.org.
- **.The Centre for Reviews and Dissemination - University of York. Prospero - The Register of Prospective Systematic Reviews. 2013 Retrieved 2012–2013, from Prospero - The Register of Prospective Systematic Reviews: http://www.crd.york.ac.uk/prospero/
- *.The Centre for Reviews and Dissemination - University of York. The Centre for Reviews and Dissemination. 2013 Retrieved 2012–2013, from Centre for Reviews and Dissemination: http://www.york.ac.uk/inst/crd.
- *.The Cochrane Collaboration. The Cochrane Collaboration. 2013 Retrieved 2013, from The Cochrane Collaboration: http://www.cochrane.org/
- *.The Community Preventive Services Task Force. The Community Guide. 2013 Sep 13; Retrieved 2012–2013, from The Guide to Community Preventive Services: What works to promote health: http://www.thecommunityguide.org.
- **.The Interagency Working Group on Youth Programs. FindYouthInfo.gov. 2014 Retrieved 2012–2014, from http://www.findyouthinfo.gov.
- *.United States Center for Disease Control. HIV/AIDS Prevention Research Synthesis Project. 2013 Retrieved 2012–2013, from HIV/AIDS Prevention Research Synthesis Project: http://www.cdc.gov/hiv/dhap/prb/prs/index.html.
- *.United States Department of Health and Human Services Office of Adolescent Health/Mathematica Policy Research. Evidence-Based Programs. 2014 Feb 26; Retrieved 2012–2014, from Teen Pregnancy Prevention Resource Center: http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/db/tpp-searchable.html.
- United States Government Accountability Office. A Variety of Rigorous Methods Can Help Identify Effective Interventions. Washington DC: United States Government Accountability Office; 2009.
- United States Government Accountability Office. Improved Dissemination and Timely Product Release Would Enhance the Usefulness of the What Works Clearinghouse. Washington DC: United States Government Accountability Office; 2010.
- *.United States Office of Justice Programs. Crimesolutions.gov. 2013 Retrieved 2012–2013, from Crimesolutions.gov: https://www.crimesolutions.gov/
- United States Office of Management and Budget. Memorandum M-13-17. Washington, DC: United States Executive Branch; 2013, Jul 26.
- *.University of Washington Alcohol and Drug Abuse Institute. Evidence Based Practices for Substance Use Disorders. 2013 Retrieved 2012–2013, from Evidence Based Practices for Substance Use Disorders: http://lib.adai.washington.edu/EBPsearch.htm#.
- *.US Department of Health & Human Services Administration for Children and Families/Mathematica Policy Research. Home Visiting Evidence of Effectiveness. 2014 Retrieved 2012–2014, from Home Visiting Evidence of Effectiveness: http://homvee.acf.hhs.gov/
- Weiss CH, Murphy-Graham E, Petrosino A, Gandhi AG. The Fairy Godmother - and Her Warts: Making the Dream of Evidence-Based Policy Come True. American Journal of Evaluation. 2008;29(1):29–47. doi: 10.1177/1098214007313742.