Abstract
An emerging literature highlights the potential for broader dissemination of evidence-based prevention programs in communities through existing state systems, such as the land grant university Extension outreach system and departments of public education and health (DOE–DPH). This exploratory study entailed surveying representatives of the national Extension system and DOE–DPH to evaluate dissemination readiness factors, as part of a larger project on an evidence-based program delivery model called PROSPER. In addition to assessing systems’ readiness factors, differences among US regions and comparative levels of readiness between state systems were evaluated. The Extension web-based survey sample N was 958 and the DOE–DPH telephone survey N was 338, with response rates of 23 and 79 %, respectively. Extension survey results suggested only a moderate level of overall readiness nationally, with relatively higher perceived need for collaborative efforts and relatively lower perceived resource availability. There were significant regional differences on all factors, generally favoring the Northeast. Results from DOE–DPH surveys showed significantly higher levels for all readiness factors, compared with Extension systems. Overall, the findings present a mixed picture. Although there were clear challenges related to measuring readiness in complex systems, addressing currently limited dissemination resources, and devising strategies for optimizing readiness, all systems showed some readiness-related strengths.
Keywords: Dissemination, Evidence-based programs, Readiness factors, State delivery systems
Introduction
There is a clear need for reduction of youth problem behaviors and for positive youth development through broader dissemination of evidence-based prevention programs (hereafter EBPs). Results from the Centers for Disease Control and Prevention’s (CDC) annual Youth Risk Behavior Survey indicate high rates of problem behaviors that have negative social, health, and economic consequences (CDC 2011). The problem behaviors surveyed by the CDC range from substance misuse to violence to other health-risking behaviors. These behaviors inhibit positive youth development, are associated with family dysfunction, and exact a tremendous economic toll. For example, underage drinking alone was estimated to cost $68 billion annually in 2007 (National Center on Addiction and Substance Abuse 2011).
A report by the National Research Council and Institute of Medicine (NRC–IOM 2009) emphasizes that the negative results of these types of youth problem behaviors could be greatly ameliorated through broader delivery of EBPs. In this context, EBPs are defined as prevention programs tested in well-designed, methodologically sound studies, with health outcome improvements demonstrated to be statistically and practically significant (see Flay et al. 2005). Surveys addressing the actual implementation of EBPs in many program delivery systems (e.g., public school systems, public health systems, social service systems) have shown that only small percentages of the populations that could benefit from specific EBPs have the opportunity to do so (e.g., Merikangas et al. 2011; NRC–IOM 2009). The result is that EBP potential for achieving population-level impact, enhancing public health and well-being, is not being realized (Spoth et al. 2013b; Woolf 2008). This is especially true given the current scarcity of the resources that typically fund EBP dissemination, such as federal and state grants. The purpose of this exploratory research was to conduct a survey-based evaluation of EBP implementation readiness in state delivery systems; it was part of a larger research project on a community-based EBP delivery system called PROSPER.
Potential of Extension and Its Linked State Systems for Broader EBP Dissemination
Cooperative Extension is an outreach system based in land grant universities that has been characterized as the largest informal education system in the world (Coward et al. 1986, p. 107), with reach into every state and county in the country. Moreover, translating program-related research into widespread practice is central to Extension’s mission, and it has a relatively extensive program delivery infrastructure in all states (see Rogers 1995). This capacity and mission suggest considerable systems potential for dissemination and evaluation of evidence-based family and youth programming (Molgaard 1997; Spoth et al. 2004). Relevant literature has accumulated over the past two decades specifying how the Extension system offers opportunities for better translating EBPs into widespread community-based practice, especially when linked with other program or service delivery systems (e.g., Molgaard 1997; Spoth and Greenberg 2011; Spoth et al. 2015).
Reports on evidence-based programming in the Extension system (Fatsch et al. 2012; Hill and Parker 2005; Perkins et al. 2006) have underscored the potential of the Extension system for the broader translation of EBPs into community-based practice, particularly in collaboration with other systems that disseminate prevention programs (e.g., public education, public health, human services). This literature notes compelling arguments for increased Extension-assisted EBP dissemination, including: (1) fostering a higher degree of consistency between science-based programming and actual practice; (2) facilitating practitioners’ attention to the characteristics of scientifically proven programming; and (3) enhancing scientist-practitioner collaborations.
Perhaps most important when considering the Extension system’s potential for disseminating EBPs to enhance public health—especially when coordinated with education and public health systems—is the directly relevant empirical evidence accrued from randomized controlled prevention trials. Most noteworthy in this context is a study of the PROSPER Partnership Model. The PROSPER Partnership Model is a delivery system for supporting and sustaining EBPs designed to promote positive youth behaviors and reduce negative or risky ones, as well as to improve related family functioning (Spoth et al. 2004); rigorous study supports its effectiveness and cost efficiency (e.g., see Spoth and Greenberg 2011; Spoth et al. 2013a).
This partnership model applies the existing and relatively stable base resources of land grant universities and Extension systems, as well as those of linked public school and public health systems, to the development and maintenance of community partnerships. Teams of community partners focus on delivering a family-focused and a school-based EBP in order to maximize the likelihood of producing community-level positive youth and family outcomes. The PROSPER research trial and associated studies have demonstrated: (1) community teams’ sustainability of evidence-based programming efforts for over 11 years; (2) community teams’ achievement of high recruitment rates for family EBP participation, compared to traditional approaches; (3) EBPs implemented with high levels of quality; (4) positive long-term effects for strengthening family relationships, parenting, and youth skill outcomes; (5) long-term effects for reducing youth problem behavior outcomes (both substance misuse and conduct problems); (6) reductions in negative peer influences indicated by social network analyses; and (7) cost efficiency, as compared with programming implemented outside of PROSPER partnerships, along with cost effectiveness (Spoth and Greenberg 2011; also see www.helpingkidsprosper.org).
The context for the development and conduct of the exploratory survey research reported herein was a series of projects funded by the CDC, the National Institutes of Health, and the Annie E. Casey Foundation that were aimed at developing strategies for increasing adoption of the PROSPER Partnership Model within state Extension systems, along with state agency partners (Education and Public Health) that disseminate EBPs. The funding supported a readiness survey of each state’s Extension system and companion surveys with key informants from the Departments of Education (DOE) and Public Health (DPH) in all states.
Readiness-Related Factors in EBP Dissemination
An extensive literature on organizational, community, and systems readiness has identified a number of readiness-related factors in EBP adoption, positive EBP implementation outcomes, and sustainability of EBP implementation (Chinman et al. 2005; Foster-Fishman et al. 2007; Hemmelgarn et al. 2001; Johnson et al. 2004; Ogilvie et al. 2008; Plested et al. 2006). Several recent studies highlight the critical importance of readiness assessments in prevention program support systems (e.g., Cooper et al. 2015; Flaspohler et al. 2012; Harris et al. 2012), particularly those entailing scientist-practitioner partnerships (Özdemir and Giannotta 2014). They also reveal gaps in the research on these readiness-related factors, including the need to better develop readiness measurements (Chaudoir et al. 2013; Emmons et al. 2012; Stamatakis et al. 2012). In this context, readiness has been operationally defined in various ways but commonly refers to an organizational unit’s or system’s ability to initiate and effectively implement innovative programming (see Weiner et al. 2008). Notably, despite the potential Extension has for disseminating prevention-oriented EBPs, researchers have identified a number of barriers concerning readiness within this complex system.
The readiness-related factors directly relevant to the Extension system, delineated in a growing literature (e.g., Betts et al. 1998; Dunifon et al. 2004; Fatsch et al. 2012; Hamilton et al. 2013; Hill and Parker 2005; Perkins et al. 2006), include: (1) limited financial resources and time (e.g., competing time demands); (2) perceptions that EBPs do not adequately address programming needs and that they are not necessarily superior to traditional programming; (3) inadequate Extension staff knowledge, training, and skills specific to EBP implementation, including lack of familiarity with the language and concepts of EBPs; (4) Extension staff resistance to change from their traditional programming roles (e.g., development of brief educational programming or materials in response to local community requests); and (5) difficulties in accommodating collaborations with scientists or academic departments that might be beneficial to EBP implementation and related program evaluation, particularly due to time constraints. The financial resource-related factor has become especially prominent in the last 4–5 years, as a result of shrinking federal and state budgets.
Following from the review of the literature on readiness and consideration of factors in adoption of the PROSPER Partnership Model, we focused on three key constructs: perceived need for collaboration, organizational capacity, and engagement in the programming of interest. To begin, there is an extensive literature on the general benefits of community collaborations (for a review see Foster-Fishman et al. 2001), and additional literature specifically highlights the critical role of collaborations in the community-based delivery of preventive interventions (Arthur et al. 2003; Hawkins et al. 2010; Kim et al. 2015; Roussos and Fawcett 2000; Spoth and Greenberg 2005; Wandersman et al. 2008). This literature concludes that community collaborations can be effective delivery mechanisms for prevention programming when they are focused on both community mobilization and the use of strategies grounded in prevention science. Although it has been conceptualized in varying ways, there also is a substantial body of literature suggesting that an organization’s capacity is another key predictor of adoption and successful implementation of new practices such as prevention programming (Durlak and DuPre 2008; Elliott and Mihalic 2004; Fixsen et al. 2005; Flaspohler et al. 2008; Greenhalgh et al. 2004; Johnson et al. 2004). There is consensus that such factors as funding and human resources are key, including staff availability, skills, and training. Lastly, a smaller set of articles suggests that prior engagement in and experience with evidence-based prevention programming enhances the likelihood of adoption of newly introduced evidence-based programming efforts (e.g., Kim et al. 2015; Spoth et al. 2013b).
Gaps in the Literature and Related Research Questions
The literature review revealed substantial work on organizational, community, and systems-level readiness factors, as noted above. Within this body of work is the aforementioned literature on readiness factors in the Extension and other dissemination systems with which it may link (e.g., those related to collaboration, organizational capacity, and engagement), but many gaps remain. First, although some Extension readiness-related survey research has been conducted in Washington and New York states, no national survey research could be found. In addition, no regional survey work was uncovered that would allow comparisons of readiness factors across Extension regions. Finally, no national readiness surveys of the dissemination systems with which Extension systems frequently link could be found. These gaps in the literature, along with research indicating the PROSPER Model’s effectiveness in disseminating EBPs, suggested the need for the surveys reported in this paper. The survey research was considered formative and exploratory, addressing three research questions that map onto the noted research gaps. The Extension system and companion agency surveys described herein were used to measure readiness-related barriers, along with those factors identified as central to successful implementation of the PROSPER Partnership Model. The first exploratory research question concerned national and regional Extension system staff readiness for prevention programming, particularly EBPs—indicated by engagement in such programming, perceived need for relevant collaborations, level of organizational capacity, and relevant training—along with the comparative strength of these indicators of readiness. The rationale for this research question was to address the readiness-related knowledge gaps indicated in the above literature review. A second question concerned differences across the four Extension regions in levels of readiness. A third exploratory question concerned the comparative levels of readiness between state DOEs–DPHs and state Extension systems. The rationale for addressing the second and third questions is as follows.
An opportunity afforded by the national Extension readiness survey was the prospect of examining regional differences in readiness levels. The national Cooperative Extension System comprises four geographic regions that mirror the regional structure of the US Census: the North Central, the South, the Northeast, and the West. Each Extension region has its own association and directorship that develops a set of priorities and standards related to outreach and evaluation in each core programming area. This renders it more likely that state Extension systems within the same region will have similar practices and standards relevant to selecting and implementing EBPs, while those practices and standards may vary across regions. Another factor that could create differences across the regions in prevention-related programming is that the perceived need for EBPs might vary across regions. For example, regions with higher levels of youth substance misuse may be more inclined to seek out evidence-based prevention programs to address this problem. For these reasons, the authors chose to examine differences in readiness constructs across the four Extension regions.
Finally, there were several interrelated reasons for surveying representatives from the Departments of Education and Public Health, in addition to Extension. Although a national survey assessing training needs of the public health workforce concerning evidence-based decision making recently has been conducted (Jacob et al. 2014), other relevant types of readiness assessments were not found. Most importantly, the design for the PROSPER Partnership Model entails active collaboration of Extension systems with DOEs and DPHs, as potential supporters of EBP delivery. Among currently delivered EBPs, financial and other forms of support often originate in these state departments. For example, survey research on programming for youth indicates that DOE-supported public schools serve as key implementers of EBPs and that an appreciable proportion of their prevention programming consists of EBPs (Hallfors and Godette 2002; Ringwalt et al. 2009). In addition, state DPHs often assume responsibility for administering EBPs that receive federal funding (e.g., block and other grants from the Substance Abuse and Mental Health Services Administration).
Because DOEs and DPHs could be potential sources of advisory, funding, and other forms of support for implementation of the PROSPER Partnership Model, assessing state DOE and DPH readiness factor levels in parallel with Extension system readiness was considered to be a critical part of assessing overall state EBP delivery readiness. In addition, the EBP survey literature cited above suggested that the DOEs and DPHs in many states were comparatively more ready for broader EBP delivery than were Extension systems, at least based on reported rates of EBPs implemented. The project’s DOE–DPH survey provided an opportunity to evaluate that expectation.
Methods
Extension System Survey Sample
The Extension system survey targeted employees of the youth and family program areas of the Cooperative Extension Systems in land grant universities. The sampling frame was limited to existing lists of employees gathered directly from open directories provided on the universities’ websites. A total of 5,072 names comprised the initial pool of potential Extension respondents. In states having fewer than 100 identified staff members, all identified Extension staff members were invited to participate; in states with larger systems, 100 were randomly selected and invited to participate. The final national Extension sample pool included 4,181 individuals.
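For concreteness, the per-state selection rule described above can be sketched as follows. This is a minimal illustration, not the authors’ code; the roster data, column names, cap, and seed are hypothetical assumptions.

```python
import pandas as pd

def draw_state_sample(staff: pd.DataFrame, cap: int = 100, seed: int = 0) -> pd.DataFrame:
    """Invite all identified staff in small state systems; randomly select
    `cap` staff in states with more than `cap` identified names."""
    def take(group: pd.DataFrame) -> pd.DataFrame:
        if len(group) <= cap:
            return group  # small system: invite everyone identified
        return group.sample(n=cap, random_state=seed)  # large system: random subset
    return staff.groupby("state", group_keys=False).apply(take)

# Toy roster: one small (60 staff) and one large (250 staff) state system.
roster = pd.DataFrame({
    "state": ["A"] * 60 + ["B"] * 250,
    "name": [f"staff_{i}" for i in range(310)],
})
sample = draw_state_sample(roster)
print(sample["state"].value_counts())  # A: 60, B: 100
```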
Sample participants were well educated: 68.5 % had a master’s degree or bachelor’s degree with additional coursework and 11.2 % had a terminal degree. On average, these participants had been in their current positions for 10.6 years (SD = 9.4) and employed by their state’s Extension system for an average of 13.6 years (SD = 10.3). Ninety-five percent of the sample had full-time positions. Just over three-quarters of the participants (76.8 %) were community-based educators whose primary responsibility was to deliver family and/or youth programs, 6.5 % worked at a regional level within their state, and 16.8 % worked at the state level (state and regional level positions tended to be more administrative in nature).
Extension System Survey Administration
Prior to survey administration, state Extension Directors were informed about the project and were asked to encourage participation among their staff. A competitive incentive of $2000 was offered to the states with the highest response rates within each of three size categories (small, medium, and large Extension systems). In addition, $500 was offered toward professional development or training to a randomly selected respondent in each participating Extension system. The survey was administered online via a secure webserver with a unique ID and password for each respondent. Data were collected over the course of a month. The response rate was 23 % (958 completed surveys, although data from 12 surveys were not usable). A review of the relevant literature suggested that this rate is consistent with response rates from similar studies using web-based approaches (Couper 2001; Dillman et al. 1998; Hamilton 2009).
DOE–DPH Survey Sample
The sample included DOE–DPH program administrators and implementers responsible for programs designed to prevent youth problem behaviors, particularly substance misuse. From the relatively limited pool of potential participants, 467 were identified and targeted for recruitment (aiming for a sample including four individuals from each department in each state, with approximately half representing each type of state department). Of the initial 467 potential respondents, 46 were subsequently deemed ineligible (primarily due to termination of employment or retirement), 41 refused participation, and 42 could not be reached, yielding an N of 338 (a response rate of 79 %). Approximately 87 % of the sample participants had a master’s or bachelor’s degree. On average, respondents had been in their current positions for 6.9 years; about half (51 %) were in administrative positions.
DOE–DPH Survey Administration
The survey was administered via computer-assisted telephone interviewing. Depending on the availability of contact information, the respondents were first contacted via phone by a trained interviewer to either conduct or schedule the interview. A consent letter was read to all respondents at the beginning of the interview and, after obtaining the respondent’s permission to proceed, the survey was administered. Due to restrictions on monetary compensation to state employees, no incentive was offered to participants.
Survey Development and Measures
Constructs concerning readiness factors summarized in the Extension and broader literature were reviewed for purposes of constructing the survey reported in this paper. Many of the key constructs mapped onto recent publications addressing specific barriers and enablers of EBP implementation in an Extension context, including the Washington state survey conducted by Hill and Parker (2005). Constructs measured focused on readiness for a combination of prevention program implementation (particularly that involving EBPs) and related collaboration, as indicators of readiness of a PROSPER-like approach to prevention program dissemination. Measures related to these factors were adapted primarily from four sources. These sources included: Simpson’s Model of Systems Readiness (Lehman et al. 2002; Simpson 2002), Aarons’ Evidence-Based Practice Attitude Scale (Aarons 2004), the CYFAR Organizational Change Survey (Betts et al. 1998), and the PROSPER Partnership Network Community and Educator Readiness measures (PROSPER Partnership Network 2011).
Except as noted below, all measures utilized five-point Likert-type response scales, most of which assessed degree of agreement or level of importance. In all cases, lower values indicated lower levels of readiness, with a value of 3 indicating neutral or “mixed” responses. The only items that were measured differently were those addressing staff training and development; those items utilized a nominal response scale with four categories (No training/not applicable = 1, Applicable, but no training = 2, Adequate = 3, Too much = 4).
A series of factor and reliability analyses was conducted. The goal of the first principal components factor analysis was to identify broad content areas addressed by the items in the survey. The scree plot resulting from this analysis suggested six factors. Four of these broader factors emerged as being most relevant to assessing readiness and were used in subsequent analyses (see Table 1). The first primary factor—state engagement in prevention programming—included 17 items (e.g., “I know where to go to find information on evidence-based programs…”; α = .87). The second primary factor—perceived need for EBP-related collaborations—had 9 items (e.g., “Based on my perception of our statewide needs for evidence-based programs and related partnerships, we should do more to facilitate partnerships between state- and county-level staff to support community prevention programming”; α = .90). The third factor—organizational capacity—consisted of 25 items (e.g., “Our… staff have enough time to complete assigned duties”; α = .89), and the fourth factor—perceived need for training—consisted of four items (α = .61).
Table 1. Readiness factors and subscales: numbers of items, internal consistencies (α), and proportions of lower and higher scores (Extension values, with DOE–DPH values in parentheses)
Factor/subscale | No. of items | Alpha | Lower scores | Higher scores |
---|---|---|---|---|
State engagement in prevention programming | 17 (4) | .87 (.66) | 83 % (8 %) | 17 % (92 %) |
Support for prevention | 4 (1) | .82 (NA) | 26 % (18 %) | 74 % (82 %) |
Knowledge of EBPs | 3 (3) | .71 (.69) | 44 % (9 %) | 56 % (91 %) |
Commitment to evaluation | 4 (0) | .85 (NA) | 74 % (NA) | 26 % (NA) |
Perceived need for EBP-related collaborations | 9 (7) | .90 (.86) | 23 % (5 %) | 77 % (95 %) |
Organizational capacity | 25 (14) | .89 (.83) | 62 % (24 %) | 38 % (76 %) |
Perceived resources | 4 (4) | .72 (.68) | 88 % (55 %) | 12 % (45 %) |
Collaboration experience | 5 (4) | .76 (.71) | 45 % (9 %) | 55 % (91 %) |
System openness to change | 4 (1) | .74 (NA) | 34 % (23 %) | 66 % (77 %) |
Staff training and development | 4 (0) | .61 (NA) | 30 % (NA) | 70 % (NA) |
DOE–DPH values in parentheses. Factor/subscale scores below 3.5 (based on a scale of 1 to 5) were categorized as lower scores; scores of 3.5 and above were categorized as higher scores. For staff training and development, the “No training/not applicable” category was excluded from analyses and the remaining response categories (scored from 2 to 4) were utilized as a Likert-type measure
Following the identification of the four primary factors, an additional series of factor analyses was conducted to identify sets of items comprising subscales within the primary factors expected to be of most relevance to successful adoption of the PROSPER Partnership Model. Scree plots suggested three subscales for the state engagement in prevention programming factor and three subscales for the organizational capacity factor (not all items from the primary factors loaded onto the identified subscales). There were no subscales identified for either the perceived need for EBP-related collaboration or the staff training and development factors (see Table 1). Reliability coefficients ranged from .71 to .85 across the six subscales.
The DOE–DPH survey development proceeded through a parallel process. Due to the similarities in items between the Extension and DOE–DPH assessments, the initial principal components factor analysis resulted in corresponding primary factors, with the exception of staff training and development, which was not included in the DOE–DPH survey. The follow-up factor analyses conducted for each factor suggested two subscales for the engagement in prevention programming factor, three subscales for the organizational capacity factor (not all organizational capacity factor items loaded onto its subscales), and no subscales for the perceived need for EBP-related collaborations factor. See Table 1 for more detail on DOE–DPH factors and subscales.
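The scale-construction sequence described above—inspecting the scree of a principal components solution and then checking the internal consistency of the retained item sets—can be sketched as below. This is a minimal illustration on simulated Likert-type responses, not the study’s analysis code; the respondent and item counts are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for survey data: 900 respondents x 20 Likert items (1-5).
X = rng.integers(1, 6, size=(900, 20)).astype(float)

# Principal components of the item correlation matrix; the scree plot is
# simply these eigenvalues in descending order.
R = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
print("Scree (first six eigenvalues):", np.round(eigenvalues[:6], 2))

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Internal consistency for a hypothetical 9-item subscale (cf. the
# perceived need for EBP-related collaborations factor reported above).
print("alpha:", round(cronbach_alpha(X[:, :9]), 2))
```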
Analyses
Descriptive data analyses were performed to answer the first research question concerning readiness scores at the national and regional levels for Extension and DOE–DPH. McNemar Chi Square analyses then were conducted to assess differences in proportions of respondents with lower- or higher-level readiness among the primary readiness factors. In order to address the second research question concerning regional differences across the readiness factors, a series of one-way ANOVAs and post hoc comparisons were conducted, as summarized in the results section. Finally, t tests were conducted to address the third research question comparing readiness factor differences between state Extension systems and DOEs–DPHs.
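A compact sketch of these three analysis steps, using standard SciPy and statsmodels routines on simulated scores, is shown below; the 3.5 cut-off for dichotomizing factor scores follows the rule in the Table 1 note. All data, group assignments, and sample sizes are illustrative assumptions, not the study’s code.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
n = 900  # illustrative Extension sample size

# Simulated per-respondent factor scores on 1-5 scales (placeholders).
engagement = rng.normal(2.9, 0.6, n).clip(1, 5)
collaboration_need = rng.normal(3.9, 0.6, n).clip(1, 5)

# RQ1: dichotomize each factor at 3.5 (higher vs. lower readiness), then
# compare the paired proportions of higher scores with McNemar's test.
hi_eng, hi_col = engagement >= 3.5, collaboration_need >= 3.5
table = [[np.sum(hi_eng & hi_col), np.sum(hi_eng & ~hi_col)],
         [np.sum(~hi_eng & hi_col), np.sum(~hi_eng & ~hi_col)]]
print(mcnemar(table, exact=False, correction=False))  # chi-square statistic, p

# RQ2: one-way ANOVA across the four regions, with Tukey HSD post hoc comparisons.
region = rng.choice(["Northeast", "Central", "South", "West"], n)
groups = [engagement[region == r] for r in np.unique(region)]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(engagement, region))

# RQ3: DOE-DPH vs. Extension comparison via a t test assuming unequal
# variances (Welch), as noted under Table 4.
doe_dph = rng.normal(4.3, 0.6, 338).clip(1, 5)
print(stats.ttest_ind(doe_dph, engagement, equal_var=False))
```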
Results
Extension System Readiness Factors
State Engagement in Prevention Programming
The national mean score on the state engagement in prevention programming factor scale was 2.93, which approximates the midpoint on the Likert-type scales in the survey and suggests relative neutrality or mixed perceptions concerning the level of readiness regarding this factor. For the purpose of conducting McNemar Chi Square analyses, a score of 3.5 was used to establish a cut-off point, above which scores suggest higher levels of readiness (Likert responses 4 and 5 indicate higher ratings on each of the specific readiness items). The McNemar Chi Square analyses indicated that the proportion of higher scores on this readiness factor was significantly smaller than the proportion of higher scores on the organizational capacity factor (χ2 = 135.19, p < .001) and the perceived need for EBP-related collaboration factor (χ2 = 526.41, p < .001; see Table 2).
Table 2. Extension readiness factor and subscale scores (Mean, SD), nationally and by region, with ANOVA results

N = [min, max] per column: National [925, 946]; Northeast [143, 146]; Central [278, 282]; South [284, 292]; West [220, 226]

Factor/subscale | National Mean | SD | Northeast Mean | SD | Central Mean | SD | South Mean | SD | West Mean | SD | ANOVA F | p |
---|---|---|---|---|---|---|---|---|---|---|---|---|
State engagement in prevention programming | 2.93 | 0.59 | 3.05a | 0.64 | 2.85b | 0.57 | 3.02a | 0.59 | 2.83b | 0.57 | 8.131 | 0.000 |
Support for prevention | 3.81 | 0.75 | 3.89ab | 0.76 | 3.71ac | 0.73 | 3.96b | 0.71 | 3.68c | 0.78 | 8.658 | 0.000 |
Knowledge of EBPs | 3.58 | 0.79 | 3.66a | 0.83 | 3.57a | 0.81 | 3.58a | 0.80 | 3.51a | 0.75 | 1.161 | 0.324 |
Commitment to evaluation | 2.86 | 0.87 | 3.00a | 0.90 | 2.75b | 0.82 | 3.03a | 0.86 | 2.70b | 0.90 | 8.879 | 0.000 |
Perceived need for EBP collaborations | 3.89 | 0.58 | 4.08a | 0.55 | 3.83b | 0.56 | 3.86b | 0.58 | 3.89b | 0.59 | 6.474 | 0.000 |
Organizational capacity | 3.34 | 0.50 | 3.40a | 0.48 | 3.34ab | 0.51 | 3.37a | 0.49 | 3.24b | 0.52 | 4.005 | 0.008 |
Perceived resources | 2.48 | 0.73 | 2.53ab | 0.77 | 2.55a | 0.74 | 2.48ab | 0.70 | 2.37b | 0.73 | 2.743 | 0.042 |
Collaboration experience | 3.57 | 0.63 | 3.62ab | 0.61 | 3.53ac | 0.62 | 3.72b | 0.61 | 3.39c | 0.64 | 12.720 | 0.000 |
System openness to change | 3.61 | 0.70 | 3.69a | 0.70 | 3.60a | 0.69 | 3.59a | 0.69 | 3.58a | 0.72 | 0.880 | 0.451 |
Staff training and development | 2.62 | 0.36 | 2.59a | 0.33 | 2.61a | 0.36 | 2.71b | 0.34 | 2.54a | 0.38 | 11.111 | 0.000 |
Means in the same row that share subscripts do not differ at p < .05 (Tukey honestly significant difference [HSD] comparisons). For example, consider support for prevention: the mean for the Northeast region (subscripts a and b) is significantly different from the mean for the West (subscript c), but not from the means for the Central (subscripts a and c) or South (subscript b) regions; the South region mean (subscript b) is significantly different from both the Central (subscripts a and c) and West (subscript c) region means
There were significant regional differences on this factor overall (F = 8.131, p < .001), as well as on the support for prevention and commitment to evaluation subscales. For the overall factor, Tukey post hoc comparisons of the four Extension regions indicated that the mean scores for the Northeast (3.05) and the South (3.02) regions were significantly higher than the mean scores for the Central (2.85) and West (2.83; see Table 2).
Subscale scores generally were consistent with the pattern for the overall state engagement factor, with the Northeast and the South regions scoring higher than the national average, and the Central and West scoring lower (see Table 2). Significant regional differences were found on the support for prevention (F = 8.658, p < .001) and commitment to evaluation (F = 8.879, p < .001) subscales (see Table 2). For the support for prevention subscale, the mean scores for the Northeast and South regions were both significantly higher than the mean score for the West, with the South region mean also exceeding that of the Central region. For the commitment to evaluation subscale, the mean scores for the Northeast and South were significantly higher than the mean scores for the Central and West. There were no significant regional differences for the knowledge of EBPs subscale.
Perceived Need for EBP-Related Collaboration
The national mean scale score for perceived need for EBP-related collaboration was 3.89, above the scale midpoint of 3.0 and the highest among the factors assessed on a five-point scale. There were significant regional differences on this factor score (F = 6.474, p < .001; see Table 2), with the Northeast region producing the highest mean score on the overall factor (4.08), indicating a relatively higher level of perceived interest in and need for increasing and improving collaborative efforts than in the other regions.
Organizational Capacity
For the overall organizational capacity scale, the national mean score was 3.34, slightly above the scale midpoint (see Table 1). McNemar Chi Square analysis indicated that the proportion of high scores on this readiness factor was significantly smaller than the proportion of high scores on the perceived need for EBP-related collaboration factor (χ2 = 263.94, p < .001). Regional differences for this factor scale also were significant (F = 4.005, p = .008); the Northeast and the South scored significantly higher than the West (see Table 2). Notably, the subscale focusing on perceived resources produced the lowest subscale scores, with a national average of 2.48 and all regions falling into a relatively lower range (see Table 2). A significant regional difference also was found for that subscale (F = 2.743, p = .042), with the mean score of the Central region exceeding that of the West (see Table 2). Significant regional differences also were found on the collaboration experience subscale (F = 12.72, p < .001); mean scores for the Northeast and South regions were significantly higher than the mean score for the West, with the South region mean also significantly higher than the Central region mean (see Table 2). There were no significant regional differences on the system openness to change subscale.
Staff Training and Development
Concerning the staff training and development factor, the national and all four regional scores registered between the “no training” and “adequate” levels, on average. Scores ranged from 2.54 (West) to 2.71 (South). Regional differences were statistically significant (F = 11.111, p < .001), with the South region producing a mean significantly higher than the means for the other regions (see Table 2).
Parallel DOE–DPH Readiness Factor Scores
The DOE–DPH sample means were generally high across the assessed primary readiness factors at both the national and regional levels, particularly so for the state engagement in prevention programming and EBP-related collaborations factors, for which mean scores exceeded 4 (see Table 3). The McNemar Chi Square tests indicated that the proportions of higher scores for the state engagement in prevention programming and perceived need for EBP-related collaborations factors each were significantly greater than the proportion of higher scores on the organizational capacity factor (χ2 = 42.61, p < .001 and χ2 = 45.62, p < .001, respectively), whereas the two did not differ significantly from each other (χ2 = 1.78, p = .18). Notably, DOE–DPH respondents scored significantly higher (all p’s < .001) on all factors than did Extension system respondents (see Table 4). In addition, relative to Extension system survey results, variations in mean scores across regions tended to be somewhat smaller, with no significant regional differences detected (see Table 3).
Table 3. DOE–DPH readiness factor and subscale scores (Mean, SD), nationally and by region, with ANOVA results

N = [min, max] per column: National [334, 338]; Northeast [74, 75]; Central [82, 83]; South [90, 91]; West [88, 89]

Factor/subscale | National Mean | SD | Northeast Mean | SD | Central Mean | SD | South Mean | SD | West Mean | SD | ANOVA F | p |
---|---|---|---|---|---|---|---|---|---|---|---|---|
State engagement in prevention programming | 4.33 | 0.56 | 4.36 | 0.49 | 4.36 | 0.53 | 4.24 | 0.61 | 4.36 | 0.58 | 1.095 | .351 |
Support for prevention | 4.19 | 0.88 | 4.18 | 0.82 | 4.13 | 0.91 | 4.14 | 0.92 | 4.30 | 0.87 | 0.613 | .607 |
Knowledge of EBPs | 4.37 | 0.60 | 4.42 | 0.51 | 4.43 | 0.56 | 4.27 | 0.64 | 4.39 | 0.64 | 1.402 | .242 |
Perceived need for EBP-related collaborations | 4.35 | 0.50 | 4.23 | 0.57 | 4.34 | 0.46 | 4.40 | 0.52 | 4.40 | 0.45 | 1.996 | .114 |
Organizational capacity | 3.78 | 0.50 | 3.70 | 0.53 | 3.78 | 0.51 | 3.82 | 0.54 | 3.82 | 0.43 | 1.028 | .380 |
Perceived resources | 3.22 | 0.74 | 3.11 | 0.72 | 3.16 | 0.80 | 3.33 | 0.74 | 3.24 | 0.70 | 1.388 | .246 |
Collaboration experience | 4.13 | 0.61 | 4.06 | 0.62 | 4.13 | 0.64 | 4.14 | 0.65 | 4.17 | 0.53 | 0.438 | .726 |
System openness to change | 3.99 | 0.84 | 3.91 | 0.87 | 3.95 | 0.81 | 3.99 | 0.85 | 4.10 | 0.81 | 0.831 | .478 |
Table 4. Comparison of DOE–DPH and Cooperative Extension System (CES) readiness factor and subscale scores

Factor/subscale | DOE–DPH Mean | SD | CES Mean | SD | Mean difference | t (df) | p value |
---|---|---|---|---|---|---|---|
State engagement in prevention programming | 4.33 | 0.56 | 3.58 | 0.75 | 0.75 | 19.27 (784.89)a | .000 |
Support for prevention | 4.19 | 0.88 | 3.59 | 0.95 | 0.60 | 10.50 (627.58)a | .000 |
Knowledge of EBPs | 4.37 | 0.60 | 3.58 | 0.79 | 0.80 | 19.26 (782.73)a | .000 |
Perceived need for EBP-related collaborations | 4.35 | 0.50 | 3.87 | 0.59 | 0.47 | 13.09 (1279) | .000 |
Organizational capacity | 3.78 | 0.50 | 3.30 | 0.50 | 0.48 | 15.25 (1282) | .000 |
Perceived resources | 3.22 | 0.74 | 2.48 | 0.73 | 0.73 | 15.70 (1282) | .000 |
Collaboration experience | 4.13 | 0.61 | 3.68 | 0.64 | 0.45 | 11.30 (1280) | .000 |
System openness to change | 3.99 | 0.84 | 3.56 | 0.96 | 0.43 | 7.83 (679.21)a | .000 |
For this analysis CES factors and subscales were reduced to exactly correspond to the DOE–DPH factors and subscales, which included a subset of the CES items. The shortened versions of the CES factors and subscales were comparable to the full versions in terms of internal consistency
a These t tests were conducted under the assumption of unequal variances across the two groups; the fractional degrees of freedom reflect the Welch–Satterthwaite adjustment
Discussion
Overview of Findings
The Extension system survey results suggested that, in general, the levels of readiness for prevention-oriented EBP implementation were moderate, across state systems. Relatively stronger readiness ratings on the perceived need for EBP-related collaborations were observed, although the score derivation from ordinal scales and the varying distributional properties of the different readiness factor scores constrain precise comparisons among factor scores. That said, the weakest readiness subscale scores concerned resources for EBP implementation and, relatedly, sub-optimal readiness also was indicated concerning staff training and development. There were significant regional differences on all primary readiness factors, generally favoring the Northeast region, with the West region showing the lowest scores on three of the four factors. DOE–DPH representatives indicated significantly stronger readiness, compared with representatives from state Extension systems, on all factors, as well as showing somewhat more inter-regional consistency in levels of readiness (no significant differences across the regions corresponding to those of Extension were found).
The literature review highlighted a number of barriers to Extension system readiness (e.g., Hill and Parker 2005; Fatsch et al. 2012; Hamilton et al. 2013) that, generally speaking, comport with the survey findings. Although some of the barriers noted in the literature were not specifically measured (e.g., familiarity with the language and concepts of EBPs, and related evidentiary standards), others—such as inadequate staff training, resistance to change, competing time demands, and limited financial resources—are consistent with the findings from the present national survey study. Another parallel with the literature worthy of note is the relatively lower level of commitment to program evaluation, a barrier that was indicated in connection with limited collaboration with academic departments. To place this finding in context, recent survey research conducted with New York Extension educators (Hamilton et al. 2013) underscored competing time demands as the greatest barrier to research involvement and noted that such involvement is especially limited in the youth programming area. However, consistent with a “mixed picture,” it also is noteworthy that a key subset of the readiness-related strengths of the Extension system suggested by the reviewed literature (e.g., stronger perceptions of the need for collaboration in general) was measured and, for the most part, supported.
Regional Differences in Readiness
As reviewed in the introduction, there are a number of reasons it was expected that there would be Extension system regional differences in readiness, including varying region-based programming priorities, standards, and practices. Regional differences in readiness were confirmed, but the reasons for the specific pattern of differences observed are not entirely clear. As noted, on most of the primary readiness constructs, the Northeast region had the highest readiness scores. Perhaps some differences (e.g., commitment to evaluation) are related to the proportion of Extension positions in this region that entail faculty appointments, if those with such appointments are more invested in EBPs and program evaluation. In addition, 4-H programming in the Northeast region is more likely to involve school-based programs and non-traditional 4-H programming than it is in the West, for example, where it often is linked to more traditional, club-based programming (D. Perkins, personal communication, February 2014).
In this context, it is interesting that, in contrast with results from the Extension system survey, there were no significant regional differences in the DOE–DPH survey. It is difficult to know how to explain the relative lack of differences. Although lower statistical power resulting from the smaller sample of the DOE–DPH representatives relative to the Extension sample likely played a role, it may also relate, in part, to the decentralized organizational structure in the Extension system (see Rogers 1995). In this regard, education and public health mandates and requirements for DOEs and DPHs at the Federal level might contribute to greater similarities in the measured readiness factors across states and regions. If decentralization were relatively greater for Extension than DOE or DPH, it would allow for relatively more variability in state system functioning that is sensitive to geographic, economic, cultural and other conditions (e.g., number of suburban/urban areas) unique to the regions. Moreover, the level of Extension staffing resources varies by region, with the West region having the lowest number of youth and family educators. Higher numbers of staff in other regions may influence readiness, both directly and indirectly (e.g., allowing for more EBP-related collaborations, in addition to more staff to implement EBPs).
Comparison of Extension and Education/Public Health Readiness
The DOE–DPH survey indicated that these organizations have relatively strong scores across all readiness factor scales and subscales, showing significantly higher scores than did Extension systems. Methodological considerations discussed below render it particularly difficult to draw any definitive conclusions about the reasons for these differences. Nonetheless, the pattern of findings is consistent with the influence of policies promulgated by the federal agencies that provide funding for state DOEs–DPHs and have increasingly emphasized the need for broader use of funding for EBP implementation (see Spoth et al. 2013b). This policy influence, partially exerted in connection with funding for state programming, may be stronger than it is in the case of the USDA program-related funding that partially supports state Extension systems. In this connection, a recent report (Shapiro et al. 2015) highlights the importance of organizational linkages in the dissemination of EBPs. Considering DOE–DPH missions and the related Federal policy support, existing organizational linkages focusing on prevention programming might be more prevalent in those two departments, as compared with the Extension system.
Salient Findings on Collaborations and Resource-Related Capacity
Study surveys were conducted in the context of the economic downturn that began in 2007–2008, when media reports of state budgetary reductions were widespread. In this context it was not unexpected that resource-related scales showed relatively lower scores across study surveys.
The impact of resource and related time constraints was further validated in subsequent phases of the project, for which the reported surveys were an early research activity. That is, key state stakeholders who subsequently learned about the prospect of supporting broader EBP implementation in their state through PROSPER indicated high levels of readiness on factors similar to those measured in the surveys, but were greatly constrained by budget cuts and other resource limits. The impact of those constraints was underscored by state stakeholder reactions to the possible economic benefits associated with EBP implementation (comparative cost efficiency, cost effectiveness, and cost benefits). These reactions suggested considerable readiness for EBP implementation projects, but not enough to supersede the resource constraints. In the Extension case, this is especially noteworthy in light of the literature on the stated priority of efficient use of resources (e.g., Dunifon et al. 2004; Hill and Parker 2005). That is, the potential of a PROSPER-like model for improving the cost efficiency of programming cannot be realized without initial resource investments, which are forestalled by the immediate lack of resources.
Another interesting pattern of findings concerns the perceived need for EBP-related collaborations. Across state Extension systems and DOEs–DPHs, this readiness factor showed relatively higher scores. This finding bodes well for broader preventive EBP dissemination, at least in some respects. It is instructive, however, to place the pattern of findings in the context of the literature on EBP-related collaboration in Extension. That is, while positive Extension staff attitudes toward collaboration in general are highlighted in the literature, it also is noted that collaborations with academic departments and with individual researchers on evaluation projects have not necessarily been readily accommodated (Hamilton et al. 2013; Hill and Parker 2005). This type of evaluation-specific collaboration is encouraged in federal-level policy regarding prevention program implementation; it also is integral to EBP delivery models like PROSPER. From this perspective it is noteworthy that commitment to evaluation also had relatively lower scores in the Extension system survey, consistent with evaluation-related collaboration barriers noted in the general literature and with earlier state Extension system surveys (Hamilton et al. 2013; Hill and Parker 2005).
Limitations
The literature reviewed emphasizes a number of limitations with readiness measurement, including the need for briefer, theory-based, more user-friendly measures demonstrating stronger psychometrics (Chaudoir et al. 2013; Emmons et al. 2012; Stamatakis et al. 2012). These and other measurement limitations and challenges are especially salient when addressing prevention programming at the systems level. This survey study highlighted such challenges, particularly concerning EBP implementation supported through the complex, dynamic, multi-leveled organizations surveyed.
It is important to note that there were no existing measures specifically designed to evaluate the readiness of an Extension system or a DOE–DPH to adopt and implement the PROSPER Partnership Model. In addition to their dissemination-related importance in the literature summarized in the introduction, the measures used for this study were selected because they were related to key components of the PROSPER Model. Higher scores on these indicators were expected to reflect higher levels of readiness for successful PROSPER Model implementation. Answering specific questions about the PROSPER Model would have required giving respondents more detail about the Model, which was not feasible as part of the reported research. Thus, we adapted existing measures that were determined to map onto the key components of the PROSPER Model, to serve as proxy indicators of readiness to adopt and successfully implement the Model. The factors that emerged exhibited reasonable reliability scores, but the validity of these measures as they relate to readiness for PROSPER Model implementation needs to be determined in future studies.
Finally, given the reality of complex, multi-level organizations like those surveyed, it is difficult to assess an organization’s readiness on a global scale. In this study, representatives from all levels (i.e., community, regional, and state) within the Extension system were surveyed, but there was no viable way to account for potential differences in perceived readiness across these levels, given the constraints of the current survey research. Staff working at the community level may have different views than regional- or state-level staff on some of the factors studied, such as capacity and the need for collaborations. Items related to knowledge of EBPs and commitment to evaluation might receive higher scores among those working at the state level, who have more contact with university researchers and the scientific community.
Given the size of the sample that was targeted for the survey of state Extension systems, a web-based survey approach was the only viable method for collecting these data. Albeit typical, the response rate for the Extension system web-based survey indicates a large percentage of non-respondents. Because we do not know how similar non-respondents are to respondents, caution should be taken when drawing conclusions from the results. In this connection, given that the DOE–DPH representatives were contacted for phone interviews, their response rate was much higher than that of the Extension-based respondents, who were sent a survey invitation via email. However, DOE–DPH respondents were asked only a subset of the items that the Extension-based respondents were, so the factors and subscales for this sample were based on fewer items. Finally, the DOE–DPH respondents were more likely to have administrative roles and to be located at the state level, as compared with Extension respondents, who were mostly located at the community level.
Conclusions and Implications
Overall, the findings present a mixed picture of readiness for broader EBP dissemination in Extension systems and linked state education and public health systems. Specifically regarding the Extension system, survey results simultaneously underscore readiness-related strengths and highlight challenges related to existing levels of readiness and, especially, to strategies for optimizing readiness.
The critically important challenge of limited training, financial, and other resources to support prospective EBP implementers in their respective organizations is particularly salient. In the context of the aforementioned negative effects of the economic downturn, with its concomitant constraints on state and federal budgets, it is noteworthy that literature reviews highlight how EBP dissemination support systems are underdeveloped, underfinanced, and under-researched (e.g., Kerner et al. 2005; Spoth et al. 2013b; Wandersman et al. 2012). A related implication is the need for innovative funding mechanisms for EBP dissemination support systems, including their readiness assessment and enhancement components, as recently recommended by the Institute of Medicine (IOM–NRC 2014) and funders (e.g., Langford et al. 2012). It is especially important to conduct further research on readiness measures and strategies for readiness enhancement in existing dissemination systems like Extension, DOE, and DPH, in order to better realize their EBP dissemination potential. Further research using the data sets from the present study entails a more in-depth evaluation of organization management practices (Chilenski et al. 2015) and of differential levels of readiness among Extension-based educators in different program areas (Perkins et al. 2014); these studies represent steps in addressing the limited research to date.
In this vein, it also is important to note that many findings did suggest the potential of the surveyed systems for enhanced dissemination of EBPs to improve their public health impact, especially when working in combination. The fact that DOE–DPH survey respondents scored significantly higher on all readiness factors and subscales than state Extension system respondents suggests that DOEs and DPHs can be valuable partners for Extension systems that are interested in pursuing prevention programming. The relatively weaker readiness in state Extension systems notwithstanding, findings such as those from the PROSPER prevention trial project highlight the system’s potential for enhancing public health through broader EBP implementation, indicating related system strengths, such as outreach capacities, connections to well-resourced educational organizations, and commitment to the translation of research to practice.
Acknowledgments
Work for this paper was supported by the Centers for Disease Control and Prevention (R18 DP002279), the National Institute on Drug Abuse (RC2 DA028879 and R01 DA013709), and the Annie E. Casey Foundation.
References
- Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS) Mental Health Services Research. 2004;6(2):61–74. doi: 10.1023/b:mhsr.0000024351.12294.65. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arthur MW, Ayers CD, Graham KA, Hawkins JD. Mobilizing communities to reduce risks for drug abuse: A comparison of two strategies. In: Bukoski WJ, Sloboda Z, editors. Handbook of drug abuse prevention. Theory, science and practice. New York: Kluwer; 2003. pp. 129–144. [Google Scholar]
- Betts SC, Marczak MS, Peterson DJ, Sewell M, Lipinski J. Tucson, AZ: The University of Arizona, Institute for Children, Youth and Families; 1998. Cooperative extension’s capacity to support programs for children, youth & families at risk: National results of the Organizational Change Survey. http://ag.arizona.edu/sfcs/cyfernet/cyfar/. [Google Scholar]
- Centers for Disease Control and Prevention. Youth risk behavior surveillance system: 2011 National overview. 2011 http://www.cdc.gov/healthyyouth/yrbs/pdf/us_overview_yrbs.pdf.
- Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science. 2013;8(1):22. doi: 10.1186/1748-5908-8-22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chilenski S, Olson JR, Schulte JA, Perkins D, Spoth R. A multi-level examination of how organizational culture relates to readiness to implement prevention and evidence-based programming in community settings. Evaluation & Program Planning. 2015;48:63–74. doi: 10.1016/j.evalprogplan.2014.10.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chinman M, Hannah G, Wandersman A, Ebener P, Hunter SB, Imm P, Sheldon J. Developing a community science research agenda for building community capacity for effective preventive interventions. American Journal of Community Psychology. 2005;35:143–157. doi: 10.1007/s10464-005-3390-6. [DOI] [PubMed] [Google Scholar]
- Cooper BR, Bumbarger BK, Moore JE. Sustaining evidence-based prevention programs: Correlates in a large-scale dissemination initiative. Prevention Science. 2015;16:145–157. doi: 10.1007/s11121-013-0427-1. [DOI] [PubMed] [Google Scholar]
- Couper MP. Web surveys: The questionnaire design challenge. In Proceedings of the 53rd session of the ISI. 2001 http://isi.cbs.nl/iamamember/CD2/pdf/263.PDF.
- Coward RT, Van Horne JE, Jackson RW. The cooperative extension service: An underused resource for rural primary prevention. In: Murray JD, Keller PA, editors. Innovations in rural community mental health. Mansfield, PA: Rural Services; 1986. pp. 105–120. [Google Scholar]
- Dillman DA, Tortora RD, Bowker D. Technical Report. Pullman, Washington: Washington State University; 1998. Principles for constructing web surveys. Social and Economic Sciences Research Center; pp. 98–50. [Google Scholar]
- Dunifon R, Duttweiler M, Pillemer K, Tobias D, Trochim WMK. Evidence-based extension. Journal of Extension. 2004;42(2). www.joe.org/joe/2004april/a2.shtml.
- Durlak JA, DuPre E. Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41:327–350. doi: 10.1007/s10464-008-9165-0.
- Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prevention Science. 2004;5(1):47–53. doi: 10.1023/b:prev.0000013981.28071.52.
- Emmons KM, Weiner B, Fernandez ME, Tu SP. Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education & Behavior. 2012;39(1):87–105. doi: 10.1177/1090198111409748.
- Fetsch RJ, MacPhee D, Boyer LK. Evidence-based programming: What is a process an extension agent can use to evaluate a program’s effectiveness? Journal of Extension. 2012;50(5). http://www.joe.org/joe/2012october/.
- Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature (FMHI Pub. No. 231). Tampa, FL: The National Implementation Research Network, University of South Florida; 2005.
- Flaspohler P, Duffy J, Wandersman A, Stillman L, Maras M. Unpacking capacity: The intersection of research to practice and community centered models. American Journal of Community Psychology. 2008;41(3–4):182–196. doi: 10.1007/s10464-008-9162-3.
- Flaspohler PD, Meehan C, Maras MA, Keller KE. Ready, willing, and able: Developing a support system to promote implementation of school-based prevention programs. American Journal of Community Psychology. 2012;50(3–4):428–444. doi: 10.1007/s10464-012-9520-z.
- Flay BR, Biglan A, Boruch RF, Castro FG, Gottfredson D, Kellam S. Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science. 2005;6(3):151–175. doi: 10.1007/s11121-005-5553-y.
- Foster-Fishman PG, Berkowitz SL, Lounsbury DW, Jacobson S, Allen NA. Building collaborative capacity in community coalitions: A review and integrative framework. American Journal of Community Psychology. 2001;29(2):241–261. doi: 10.1023/A:1010378613583.
- Foster-Fishman PG, Cantillon D, Pierce SJ, Van Egeren LA. Building an active citizenry: The role of neighborhood problems, readiness, and capacity for change. American Journal of Community Psychology. 2007;39:91–106. doi: 10.1007/s10464-007-9097-0.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly. 2004;82:581–629. doi: 10.1111/j.0887-378X.2004.00325.x.
- Hallfors D, Godette D. Will the ‘principles of effectiveness’ improve prevention practice? Early findings from a diffusion study. Health Education Research. 2002;17(4):461–470. doi: 10.1093/her/17.4.461.
- Hamilton MB. Online survey response rates and times. 2009. Retrieved 12/12/13 from http://www.supersurvey.com/papers/supersurvey_white_paper_response_rates.pdf.
- Hamilton SF, Chen EK, Pillemer K, Meador RH. Research use by cooperative extension educators in New York State. Journal of Extension. 2013;51(3), Article 3FEA2. http://www.joe.org/joe/2013june/a2.php.
- Harris JR, Cheadle A, Hannon PA, Lichiello P, Forehand M, Mahoney E, et al. A framework for disseminating evidence-based health promotion practices. Preventing Chronic Disease. 2012;9:110081.
- Hawkins JD, Shapiro VB, Fagan AA. Disseminating effective community prevention practice: Opportunities for social work education. Research on Social Work Practice. 2010;20:518–527. doi: 10.1177/1049731509359919.
- Hemmelgarn AL, Glisson C, Dukes D. Emergency room culture and the emotional support component of family-centered care. Children’s Health Care. 2001;30:93–110.
- Hill LG, Parker LA. Extension as a delivery system for prevention programming: Capacity, barriers, and opportunities. Journal of Extension. 2005;43(1). http://www.joe.org/joe/2005february/a1.php.
- Institute of Medicine (IOM) and National Research Council (NRC). Strategies for scaling effective family-focused preventive interventions to promote children’s cognitive, affective, and behavioral health: Workshop summary. Washington, DC: The National Academies Press; 2014.
- Jacob RR, Baker EA, Allen P, Dodson EA, Duggan K, Fields R, et al. Training needs and supports for evidence-based decision making among the public health workforce in the United States. BMC Health Services Research. 2014;14(1):564. doi: 10.1186/s12913-014-0564-7.
- Johnson K, Hays C, Center H, Daley C. Building capacity and sustainable prevention innovations: A sustainability planning model. Evaluation and Program Planning. 2004;27:135–149.
- Kerner J, Rimer B, Emmons K. Dissemination research and research dissemination: How can we close the gap? Health Psychology. 2005;24:443–446. doi: 10.1037/0278-6133.24.5.443.
- Kim BKE, Gilman AB, Hawkins JD. School- and community-based preventive interventions during adolescence: Preventing delinquency through science-guided collective action. In: Morizot J, Kazemian L, editors. The development of criminal and antisocial behavior: Theoretical foundations and practical applications. Berlin: Springer; 2015.
- Langford BH, Flynn-Khan M, English K, Grimm G, Taylor K. Evidence2Success, making wise investments in children’s futures: Financing strategies and structures. Baltimore: The Annie E. Casey Foundation; 2012.
- Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22:197–209. doi: 10.1016/s0740-5472(02)00233-7.
- Merikangas KR, He J, Burstein M, Swendsen J, Avenevoli S, Case B, et al. Service utilization for lifetime mental disorders in US adolescents: Results of the National Comorbidity Survey-Adolescent Supplement (NCS-A). Journal of the American Academy of Child and Adolescent Psychiatry. 2011;50(1):32–45. doi: 10.1016/j.jaac.2010.10.006.
- Molgaard V. The Extension Service as key mechanism for research and services delivery for prevention of mental health disorders in rural areas. American Journal of Community Psychology. 1997;25(4):515–544. doi: 10.1023/a:1024611706598.
- National Center on Addiction and Substance Abuse (CASA) at Columbia University. Adolescent substance use: America’s #1 public health problem. New York: CASA; 2011. http://www.casacolumbia.org/addiction-research/reports/adolescent-substance-use.
- National Research Council and Institute of Medicine. Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. O’Connell ME, Boat T, Warner KE, editors. Committee on the prevention of mental disorders and substance abuse among children, youth, and young adults: Research advances and promising interventions. Washington, DC: The National Academies Press; 2009.
- Ogilvie KA, Moore RS, Ogilvie DC, Johnson KW, Collins DA, Shamblen SR. Changing community readiness to prevent the abuse of inhalants and other harmful legal products in Alaska. Journal of Community Health. 2008;33(4):248–258. doi: 10.1007/s10900-008-9087-7.
- Özdemir M, Giannotta F. Improving dissemination of evidence-based programs through researcher-practitioner collaboration. New Directions for Youth Development. 2014;141:107–116. doi: 10.1002/yd.20090.
- Perkins D, Meyer-Chilenski SM, Olson JR, Mincemoyer C, Spoth R. Knowledge, attitudes, and commitment towards evidence-based prevention programs: Differences across Family and Consumer Sciences and 4-H Youth Development Educators. Journal of Extension. 2014;52(3). http://www.joe.org/joe/2014june/a6.php.
- Perkins D, Mincemoyer C, Lillehoj C. Extension educators’ perception of community readiness, knowledge of youth prevention science, and experience with collaboration. Journal of Family and Consumer Science. 2006;98(4):20–26.
- Plested BA, Edwards RW, Jumper-Thurman P. Community readiness: A handbook for successful change. Fort Collins, CO: Tri-Ethnic Center for Prevention Research; 2006. http://triethniccenter.colostate.edu/CRhandbookcopy.htm.
- PROSPER Partnership Network. Technical report for national survey of state extension systems. 2011.
- Ringwalt C, Vincus A, Hanley S, Ennett S, Bowling J, Rohrbach L. The prevalence of evidence-based drug use prevention curricula in U.S. middle schools in 2005. Prevention Science. 2009;10(1):33–40. doi: 10.1007/s11121-008-0112-y.
- Rogers EM. Diffusion of innovations. 4th ed. New York: Free Press; 1995.
- Roussos ST, Fawcett SB. A review of collaborative partnerships as a strategy for improving community health. Annual Review of Public Health. 2000;21:369–402. doi: 10.1146/annurev.publhealth.21.1.369.
- Shapiro VB, Oesterle S, Hawkins JD. Relating coalition capacity to the adoption of science-based prevention in communities: Evidence from a randomized trial of Communities That Care. American Journal of Community Psychology. 2015;55(1–2):1–12. doi: 10.1007/s10464-014-9684-9.
- Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22:171–182. doi: 10.1016/s0740-5472(02)00231-3.
- Spoth RL, Greenberg MT. Toward a comprehensive strategy for effective practitioner-scientist partnerships and larger-scale community benefits. American Journal of Community Psychology. 2005;35(3/4):107–126. doi: 10.1007/s10464-005-3388-0.
- Spoth R, Greenberg M. Impact challenges in community science-with-practice: Lessons from PROSPER on transformative practitioner-scientist partnerships and prevention infrastructure development. American Journal of Community Psychology. 2011;48(1–2):106–119. doi: 10.1007/s10464-010-9417-7.
- Spoth R, Greenberg M, Bierman K, Redmond C. PROSPER community-university partnership model for public education systems: Capacity-building for evidence-based, competence-building prevention. Prevention Science. 2004;5(1):31–39. doi: 10.1023/b:prev.0000013979.52796.8b.
- Spoth R, Redmond C, Mason WA, Schainker L, Borduin L. Research on the Strengthening Families Program for Parents and Youth 10–14: Long-term effects, mechanisms, translation to public health, PROSPER partnership scale up. In: Scheier LM, editor. Handbook of drug prevention. Washington, DC: American Psychological Association; 2015.
- Spoth R, Redmond C, Shin C, Greenberg M, Feinberg M, Schainker L. PROSPER community-university partnership delivery system effects on substance misuse through 6 ½ years past baseline from a cluster randomized controlled intervention trial. Preventive Medicine. 2013a;56:190–196. doi: 10.1016/j.ypmed.2012.12.013.
- Spoth R, Rohrbach LA, Greenberg M, Leaf P, Brown CH, Fagan A, et al. Addressing core challenges for the next generation of type 2 translation research and systems: The Translation Science to Population Impact (TSci Impact) framework. Prevention Science. 2013b;14(4):319–351. doi: 10.1007/s11121-012-0362-6.
- Stamatakis KA, McQueen A, Filler C, Boland E, Dreisinger M, Brownson RC, Luke DA. Measurement properties of a novel survey to assess stages of organizational readiness for evidence-based interventions in community chronic disease prevention settings. Implementation Science. 2012;7(1):65. doi: 10.1186/1748-5908-7-65.
- Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: Tools, training, technical assistance, and quality assurance/quality improvement. American Journal of Community Psychology. 2012;50(3–4):445–459. doi: 10.1007/s10464-012-9509-7.
- Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology. 2008;41:171–181. doi: 10.1007/s10464-008-9174-z.
- Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: A review of the literature in health services research and other fields. Medical Care Research and Review. 2008;65(4):379–436. doi: 10.1177/1077558708317802.
- Woolf SH. The meaning of translational research and why it matters. JAMA. 2008;299:211–213. doi: 10.1001/jama.2007.26.