Abstract
In this study, we explored the extent to which educators discuss and prioritize Rogers’ (1995) five attributes of innovations (relative advantage, compatibility, complexity, observability, and trialability) in the context of research use. Using a directed content analysis of 54 semi-structured interviews and exemplar quotes, we describe how educators mentioned compatibility most frequently, but also commonly invoked observability and complexity in their discussions of research use. Our results also revealed key differences between educators in executive and non-executive roles. We discuss the implications of our findings for closing the research-practice gap in school-based mental health services and psychosocial interventions.
Keywords: diffusion, innovation, research use, districts, schools, directed content analysis
Public schools provide a nationwide infrastructure and promising point of access for delivering programming and services to a large, representative subset of youth in the United States. U.S. public schools enroll approximately 50 million children and adolescents, and these youth are diverse with respect to race, ethnicity, and socioeconomic status (Kena et al., 2016). Furthermore, on average, U.S. children spend over 30 hours per week in school, suggesting that school is a critical setting in their lives (Hofferth, 2009; Hofferth & Sandberg, 2001). Given these features, public schools offer significant potential to reach a diverse population of children and adolescents in need of mental health services and psychosocial interventions (Atkins, Hoagwood, Kutash, & Seidman, 2010; Atkins & Lakind, 2013; Cappella, Frazier, Atkins, Schoenwald, & Glisson, 2008; Rones & Hoagwood, 2000).
However, despite the emergence of policies designed to encourage the use of evidence-based interventions in schools (e.g., No Child Left Behind, 2001; Every Student Succeeds Act, 2015; U.S. Department of Education’s Safe and Drug Free Schools Program’s Principles of Effectiveness, 1998), there is still a research-practice gap in education (e.g., Carnine, 1997; Cook, Smith, & Tankersley, 2012; Hallfors & Godette, 2002; Neal, Neal, Kornbluh, Mills, & Lawlor, 2015a). This research-practice gap not only affects decision-making around instructional programs and practices, but also has implications for efforts to improve mental health services and psychosocial interventions in schools (e.g., Hoagwood & Johnson, 2003; Kratochwill & Stoiber, 2000; Ringeisen, Henderson, & Hoagwood, 2003). Educators often struggle to integrate research into their decision-making, and express skepticism toward research (e.g., Dagenais, Lysenko, Abrami, Bernard, Ramde, & Janosz, 2012; Finnigan, Daly, & Che, 2013; Tseng, 2012). Moreover, when they do use research, the process is often complicated and involves integration with other competing sources of evidence like prior experience or feedback from community stakeholders (Barwick, Barac, Akrong, Johnson, & Chaban, 2014; Honig & Coburn, 2008). Thus, for a variety of reasons, educators continue to use mental health services and psychosocial interventions with questionable research effectiveness (e.g., Greenberg et al., 2003; Walker, 2004; Reinke, Stormont, Herman, Puri, & Goel, 2011).
Educators’ research use includes engagement with research to consider new ideas, the application of research to decision-making about programs or practices, and the use of evidence-based programs or practices. There are many documented reasons why educators do not commonly use research about mental health programming. For example, educators often express distrust in using research, citing the advantages of other forms of evidence such as personal experience (e.g., Boardman, Argüelles, Vaughn, Hughes, & Klingner, 2005). Additionally, mental health services and psychosocial interventions are often developed and tested without consideration of their relevance to or compatibility with the school context (e.g., Atkins et al., 2010; Hoagwood & Johnson, 2003; Ringeisen et al., 2003). However, studies on facilitators and barriers to research use in education remain largely piecemeal in nature, and most do not offer a unifying theory that can be leveraged to improve research use. Furthermore, these studies often treat educators as a single group, and therefore do not distinguish whether facilitators and barriers to research use differ by educators’ roles (e.g., principals versus teachers).
In this paper, we conceptualize research use as an innovation for educators (e.g., Coburn & Talbert, 2006; Dagenais et al., 2012). An innovation is “an idea, practice, or object that is perceived as new” by potential adopters (Rogers, 1995, p. 11). Adopting this lens allows us to explore the extent to which educators discuss and prioritize Rogers’ five attributes of innovations (i.e., relative advantage, compatibility, complexity, observability, and trialability) in the context of research use. This is helpful for both research and practice because it provides a framework for thinking about potential facilitators and barriers to research use. These potential facilitators and barriers can be explored in future systematic study of the research-practice gap in education and are points of intervention to increase educators’ research use in making decisions about mental health services and psychosocial programming.
First, we start with an overview of diffusion of innovations theory, providing definitions and examples of each of the five attributes theorized to encourage innovation adoption and subsequent use: relative advantage, compatibility, complexity, observability, and trialability. Although Rogers’ (1995) diffusion of innovations theory contains many other components, we focus on these five attributes because they are malleable for researchers and educators interested in promoting research use in schools. Second, we review the literature on educators’ attitudes toward research use, focusing on why research use constitutes an innovation in education and how barriers to research use map onto the five attributes of innovations. Third, using a directed content analysis of 54 semi-structured interviews with educators in two of Michigan’s largest counties about their decision-making regarding social skills programs, we explore the extent to which these five attributes of innovations are discussed and prioritized in the context of research use. We also investigate differences between educators in executive (i.e., superintendents or principals) and non-executive roles (i.e., district staff or teachers). Finally, we conclude with implications for closing the research-practice gap in school-based mental health services and psychosocial interventions and implications for future examination of research use in schools.
Diffusion of Innovations Theory
Diffusion of innovations theory, first developed by Rogers (1995) in the field of communication, is one of the most widely applied and cited theories of innovation adoption. This theory describes the mechanisms by which ideas, practices, programs, or products that are perceived as new (i.e., innovations) spread through interpersonal channels across a population of potential adopters over a period of time (i.e., diffusion; Rogers, 1995). While some theories of innovation adoption focus primarily on government policies, organizational characteristics like leadership, or individual characteristics like motivation (Wisdom, Chor, Hoagwood, & Horwitz, 2014), diffusion of innovations theory also highlights how characteristics of the innovation affect adoption decisions and subsequent use. As described by Rogers (1995), the defining characteristic of an innovation is the perception of novelty. If a potential adopter perceives an idea, practice, program, or product as new, it is an innovation for that adopter regardless of its objective novelty. Potential adopters face uncertainty when deciding to adopt and use those unfamiliar innovations, and often rely on attributes of the innovations to make these decisions. In particular, five attributes are theorized to shape potential adopters’ attitudes about and subsequent decisions to adopt innovations: relative advantage, compatibility, complexity, observability, and trialability (see Table 1).
Table 1.
Rogers’ (1995) Five Attributes of Innovations and their Application to Educators’ Research Use in Decision-Making
| Attributes | Conceptual Definition | Operational Definition in Codebook |
|---|---|---|
| Relative Advantage | “The degree to which an innovation is perceived as better than the idea it supersedes” (Rogers, 1995, p. 15). | Research is compared to an alternative source of evidence. Examples of alternative sources of evidence might include, but are not limited to, nothing (i.e., no evidence), personal experience, teacher judgment, advice from colleagues, or anecdotes/testimonials. Relative advantage excludes cases where two sources of evidence are mentioned but a comparison is not explicitly made. |
| Compatibility | “The degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters” (Rogers, 1995, p. 15). | Research is discussed in terms of its match or mismatch to classroom, school, district, individual, student, or community values, beliefs, roles, past experiences, specific needs, resources, demographics, or culture. Discussion of a lack of available research data that fit school, district, administrator, or student values, past experiences, needs, or work would also count. Segments may be coded as compatibility if interviewees discuss views or attitudes about the usefulness of research as being contextually bound. Segments may also be coded as compatibility if the interviewee discusses researchers as being out of touch with practice. |
| Complexity | “The degree to which an innovation is perceived as difficult to use” (Rogers, 1995, p. 16). | Research is discussed in terms of how easy or difficult it is to use or how easy or difficult its content is to comprehend. Discussions of jargon and difficulties interpreting research results would count here. Discussions of clear language and non-technical or visual presentations of research could also count here. Discussions of evidence-based practices or interactions with research that are time or resource intensive would also count here. Finally, this code includes mentions of the ease or difficulty of discerning whether something counts as research. |
| Observability | “The degree to which the results of an innovation are visible to others” (Rogers, 1995, p. 16). | Research is discussed in terms of its visibility or lack of visibility. Examples might include: discussions of access to research, observations of or interactions with other schools, districts, administrators, or organizations who are using research, direct interactions with researchers, or mentions of the source of research as having a strong reputation. |
| Trialability | “The degree to which an innovation may be experimented with on a limited basis” (Rogers, 1995, p. 16). | The degree to which a school, district, or administrator can try out or experiment with research is discussed. Examples of trialability might include references to trying out the research process in undergraduate or graduate coursework, writing about research, or other opportunities to directly participate in research on a limited basis. Experimentation with research does not have to involve implementation in a school or district to count as trialability. For example, learning about research methods in a course could count here. |
Relative advantage refers to “the degree to which an innovation is perceived as better than the idea it supersedes” (Rogers, 1995, p. 15). Potential adopters weigh the pros and cons of an innovation against other ideas, practices, programs, or products that they are currently using. Rogers (1995) notes that adopters may consider economic, social prestige, convenience, and/or satisfaction factors in making a judgment about perceived relative advantage. Innovations that are perceived to hold a relative advantage over alternatives are adopted more quickly. Compatibility refers to “the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters” (Rogers, 1995, p. 15). Innovations that match the values, past experiences, and needs of potential adopters are adopted more quickly. Complexity refers to “the degree to which an innovation is perceived as difficult to use” (Rogers, 1995, p. 16). Innovations that are perceived by potential adopters as easy to use are adopted more quickly. Observability refers to “the degree to which the results of an innovation are visible to others” (Rogers, 1995, p. 16). Innovations whose results can be observed by potential adopters are adopted more quickly. Finally, trialability refers to “the degree to which an innovation may be experimented with on a limited basis” (Rogers, 1995, p. 16). Innovations that potential adopters can try without making a full-scale investment are adopted more quickly.
Educators’ Research Use as an Innovation
Diffusion of innovations theory may provide important insights into narrowing the research-practice gap in education. Here, the innovation is educators’ conceptual or instrumental research use (Weiss & Bucuvalas, 1980; Weiss, 1998). Conceptual research use occurs when educators’ engagement with research leads them to consider new ideas or to see their current practice in a new light, while instrumental research use occurs when educators use research to make decisions about programs or practices or when they use evidence-based programs or practices.
Studies of educators’ conceptions of and attitudes toward research suggest multiple reasons to believe that research use is an innovation for educators in their daily practice. First, educators are not always familiar with what constitutes research. For example, Coburn and Talbert (2006) found that the majority of central office and building staff in a large urban district held vague conceptions of research. These vague conceptions of research may make it difficult for educators to engage in conceptual or instrumental research use. Second, many educators lack self-assurance or a supportive normative environment to use research. For instance, Williams and Cole (2007) found that many teachers reported a lack of confidence in their ability to locate and interpret research findings. Teachers in this study also expressed concerns that their use of research evidence might be viewed negatively or perceived as different by their peers. For these reasons, it is not surprising that a recent review of educators’ research use in the U.S., Canada, U.K., and Australia found that educators rarely report reading research, using research findings, or applying research to their practice, and concluded that “the use of research-based information is hardly a significant part of the school-practice scenario” (Dagenais et al., 2012, p. 296).
Rogers’ (1995) five attributes of innovations might be helpful for understanding what characteristics of research facilitate or hinder educators’ decisions to use it. To date, studies of district officials (e.g., Coburn, Honig, & Stein, 2009; Coburn & Talbert, 2006; Corcoran, Fuhrman, & Belcher, 2001), principals (e.g., Cousins & Leithwood, 1993; Cousins & Walker, 2000; Everton, Galton, & Pell, 2000), and teachers (e.g., Boardman et al., 2005; Behrstock-Sherratt, Drill, & Miller, 2011; Hultman & Hörberg, 1998; Williams & Cole, 2007) suggest that relative advantage, compatibility, complexity, observability, and trialability each play an important role in educators’ research use.
Relative Advantage
There is some evidence that educators weigh research use against the use of other forms of knowledge that they commonly apply in decision-making. Central district officials and teachers often privilege their own personal experiences over formal research evidence (Bartels, 2003; Coburn & Talbert, 2006; Kennedy, 1997; Hultman & Hörberg, 1998). One study found that teachers in Sweden rated their own personal experience as more important than research reports, journals, or advice from specialists (Hultman & Hörberg, 1998), while another found that teachers preferred a practice-oriented teaching article to a research article because it was more in touch with their own personal experiences (Bartels, 2003). Given this preference for personal experience, educators may rely on personal testimonials or anecdotes when making decisions (e.g., Corcoran et al., 2001), and some have called for research to include teacher narratives to improve its uptake (Cook et al., 2012). In addition to a preference for personal experience, educators value advice from colleagues as an important source of knowledge in decision-making (Behrstock-Sherratt et al., 2011; Corcoran et al., 2001; Hultman & Hörberg, 1998; Latham, 1993; Williams & Cole, 2007). Educators are more likely to turn to colleagues than to the research literature, both because it is easier to ask a colleague down the hall for advice than to sift through the research literature (Behrstock-Sherratt et al., 2011) and because they often value and trust advice from their colleagues more than information from educational research (e.g., Corcoran et al., 2001; Latham, 1993).
Compatibility
Educators may trust other forms of knowledge such as personal experience and colleagues’ advice more than research because they perceive these forms of knowledge as more compatible with their needs, goals, values, or the student populations in their schools or districts. Indeed, educators often view research or evidence-based practices as incompatible with their work in classrooms or schools (Bartels, 2003; Boardman et al., 2005; Farley-Ripple, 2012; Kennedy, 1997; Latham, 1993; Long, Sanetti, Collier-Meek, Gallucci, Altschaefl, & Kratochwill, 2016). This perceived incompatibility may stem from the different agendas of researchers and educators (e.g., Bartels, 2003; Coburn, Penuel, & Geil, 2013; Hultman & Hörberg, 1998), mismatched expectations of researchers and educators for the timing of study results (Coburn et al., 2009), or the lack of available data on issues that matter to educators (Kennedy, 1997). Additionally, efficacy research on mental health interventions has largely ignored the realities of the school context (Hoagwood & Johnson, 2003). On the other hand, when educators perceive research as compatible with their needs or the needs and demographics of their students, use is improved (e.g., Barwick et al., 2014; Behrstock-Sherratt et al., 2011; Cook et al., 2012; Cousins & Leithwood, 1993; Dagenais et al., 2012). Research is more likely to be viewed as compatible when its topics or results match educators’ current practices or beliefs (Coburn & Talbert, 2006; Corcoran et al., 2001), and it is more likely to be viewed as compatible by educators with higher levels of personal participation in research or higher personal teaching self-efficacy (Cousins & Walker, 2000).
Complexity
Educators’ research use may depend on the degree to which they perceive research as easy to understand or interpret. Past studies suggest that educators often view the research literature as esoteric and filled with jargon (Bartels, 2003; Coburn et al., 2009; Corcoran et al., 2001; Latham, 1993). Educators struggle to interpret statistically complex results and to make sense of abstract study findings (Behrstock-Sherratt et al., 2011). There is often an explicit preference among educators for research findings that are clearly synthesized and written in plain language (Bartels, 2003; Barwick et al., 2014). Therefore, increasing educators’ research use may require the dissemination of findings in a clear, non-technical manner (Carnine, 1997; Hultman & Hörberg, 1998). Likewise, strengthening educators’ capacity to comprehend and apply research findings may increase their research use (Cousins & Walker, 2000; Lysenko et al., 2014; Williams & Cole, 2007). Complexity may also play a role in educators’ limited use of evidence-based programs and practices. Educators often cite the time- and resource-intensive nature of evidence-based interventions as a barrier to use (Boardman et al., 2005; Long et al., 2016).
Observability
The observability or visibility of research may play a role in its use by educators. Educators’ lack of awareness of and access to research is a well-documented problem in the literature (Behrstock-Sherratt et al., 2011; Corcoran et al., 2001; Farley-Ripple, 2012; Hultman & Hörberg, 1998; Kennedy, 1997; Latham, 1993; Williams & Cole, 2007). Research studies are often hard to locate, and results are often not available in time to inform educational decision-making. Improving the communication quality and timeliness of research findings may increase research’s visibility among educators and its subsequent use (Dagenais et al., 2012; Cousins & Leithwood, 1993). Another aspect of observability is the reputation of the source of the research. Educators are more likely to perceive research as credible, and consequently more likely to use it, when it comes from an observable source with a strong reputation in the educational community (Coburn & Talbert, 2006). Finally, opportunities to connect with others who are engaged in research may enhance educators’ research use. Educators are more likely to use research when they have participated in activities designed to bring them into contact with others who are using research (e.g., research-based discussions with peers; contact with people distributing research; Lysenko et al., 2014). Similarly, educators have expressed a preference for opportunities to see evidence-based programs or policies modeled by others (Barwick et al., 2014; Boardman et al., 2005; Corcoran et al., 2001).
Trialability
While observability focuses on the visibility of an innovation and its benefits, trialability focuses on the opportunity to directly experiment with an innovation. Educators are more likely to have positive attitudes toward research, and are more likely to use it, when they have had prior opportunities to directly participate in research activities (Cousins & Walker, 2000; Dagenais et al., 2012). For example, during their undergraduate or graduate coursework, educators often complete assignments that expose them to research activities. These types of assignments allow educators to try out the research process and to become more familiar with research methods (e.g., Cousins & Walker, 2000; Farley-Ripple, 2012; Miretsky, 2007). Additionally, educators may have opportunities to engage in action research projects (e.g., Cousins & Walker, 2000) or research-practice partnerships (e.g., Coburn et al., 2013) that allow them to try out different aspects of research. The theme of trialability also appears in research on the use of evidence-based programs and practices. Specifically, Boardman et al. (2005) found that teachers were more likely to try bits and pieces of new evidence-based programs that were perceived as fitting their needs.
Differences by Role
To date, much of the literature has focused on educators in only one or two positions (e.g., teachers or principals) and often does not examine differences across positions. Prior research offers mixed evidence on the role of hierarchy in educators’ research use, with some studies suggesting that executives (e.g., superintendents, principals) are more likely to use research than non-executives (e.g., teachers, staff) and other studies suggesting the reverse (e.g., Coburn & Talbert, 2006; Cousins & Leithwood, 1986; Dagenais et al., 2012; Lysenko et al., 2014). Moreover, we are limited in our knowledge of whether executives and non-executives tend to value different characteristics of research. One study found that teachers and district staff were much more likely than superintendents or principals to value the compatibility of research with teacher judgment, suggesting that different types of educators may value different aspects of research given their distinct roles (Coburn & Talbert, 2006). However, more research is needed to determine how educators in executive and non-executive roles prioritize each of Rogers’ (1995) characteristics of an innovation when considering research use.
The Current Study
Although the literature provides some support for the importance of Rogers’ (1995) five attributes of innovations for educators’ research use, there are two gaps that we aim to fill in the current study. First, the literature on educators’ research use remains piecemeal and studies have not typically framed research use as an innovation for educators or simultaneously examined the range of attributes of innovations that might influence research use. Second, there is limited knowledge about whether different attributes of innovations are more pressing for certain educator positions than others. Therefore, employing semi-structured interviews with 54 Michigan educators, we explore the extent to which educators discuss and prioritize Rogers’ (1995) five attributes of innovations in the context of research use, and investigate differences across educators in executive (i.e., superintendents or principals) and non-executive roles (i.e., district staff or teachers).
Method
Setting and Sample
Data for this study were collected from public school educators working in two Michigan counties, Lake and River1, which are among the ten most populous counties in the state, with 2015 populations over 250,000 (U.S. Census Bureau Population Division, 2016). Over 70% of residents in both counties were White. The median household income in Lake County was slightly over $60,000, while the median household income in River County was slightly over $45,000 (U.S. Census Bureau, 2014). Both counties included a mix of rural and urban areas, as well as at least one doctorate-granting university.
To develop a population sampling frame, we asked district superintendents and county-level educational administrators to identify individuals who play a role in selecting and deciding to use school programs focused on improving students’ social skills and mental health in their district, or in districts in their county. Additionally, at the conclusion of each interview conducted for this study, we asked the interviewee to identify such individuals. This referral-based process is an example of what Palinkas et al. (2015) call criterion-i sampling, and mirrors Coburn and Talbert’s (2006) approach in their study of educators’ conceptions of evidence use; it generated a list of 100 names, which constituted our population. We attempted to interview 85 (85%) of these educators, sampling them from the population to maximize variation by district and role, using what Palinkas et al. (2015) call maximum variation sampling. Of those we contacted for an interview, 54 responded and chose to participate (N = 18 in Lake County, N = 36 in River County), yielding a response rate of 63.5%.
Table 2 summarizes the demographic characteristics of our sample, as well as the characteristics of the districts and buildings in which they worked. On average, educators in our sample had worked in their current district for a total of 14.05 years (SD= 11.10) and had worked in their current position for 5.97 years (SD=5.65). The majority of educators in our sample were women (n= 36, 66.67%), and almost all (n= 53, 98.15%) had received a post-baccalaureate degree. A total of 42 educators (77.78%) identified as White, 9 (16.67%) identified as Black or African American, 2 (3.7%) identified as “Other Race” or Multiracial, and 1 did not report a race (1.85%). Two educators (3.70%) identified as Hispanic or Latino(a). Finally, 24 educators (44.44%) served in executive roles (i.e., superintendents, assistant superintendents, principals, assistant principals) while 30 educators (55.56%) served in non-executive roles (i.e., directors, supervisors, teachers, counselors, social workers, or nurses).
Table 2.
Participant Demographics and Workplace Characteristics
| Participants (N = 54) | | Districts (N = 8) | |
|---|---|---|---|
| Level | | Mean Size | 9,506 Students (SD = 4,773) |
| District | 17 (31.48%) | Mean % Minority | 52.44% (SD = 24.20) |
| School Building | 37 (68.52%) | Mean % Reading Proficient | 27.08% (SD = 7.31) |
| Role | | | |
| Executive | 24 (44.44%) | Buildings (N = 23) | |
| Non-Executive | 30 (55.56%) | Mean Size | 573 Students (SD = 319) |
| Race | | Mean % Minority | 31.81% (SD = 19.06) |
| White | 42 (77.78%) | Mean % Reading Proficient | 38.44% (SD = 12.53) |
| Black or African American | 9 (16.67%) | | |
| Other Race/Multiracial | 2 (3.70%) | | |
| Missing | 1 (1.85%) | | |
| Ethnicity | | | |
| Hispanic or Latino(a) | 2 (3.70%) | | |
| Sex | | | |
| Female | 36 (66.67%) | | |
| Male | 18 (33.33%) | | |
| Average Length of Time in Current District | 14.05 years (SD = 11.09) | | |
| Average Length of Time in Position | 5.97 years (SD = 5.65) | | |
The educators in our sample were employed across 8 districts (3 in Lake County, 5 in River County). Seventeen (31.48%) of them were employed in their district’s central office. These districts were small to medium-sized, with student enrollment ranging from under 2,500 to over 16,000 (M = 9,506.35, SD = 4,773.62). These districts were characterized by wide variation in student demographic composition and academic performance. The enrollment of students from underrepresented racial groups in these districts ranged from 9.45% to 72.88% (M = 52.44%, SD = 24.20). Likewise, the percentage of students demonstrating proficiency in reading on statewide, standardized tests ranged from 17.60% to 39.25% (M = 27.08%, SD = 7.31).
The other 37 (68.52%) educators in our sample were employed in 23 school buildings. These educators were employed in buildings ranging in student enrollment from under 250 to over 1,200 (M = 573.41, SD = 319.18). The buildings also exhibited variation in student demographic composition and academic performance. The enrollment of students from underrepresented racial groups in these school buildings ranged from 3.95% to 71.85% (M = 30.81%, SD = 19.06), while reading proficiency ranged from 20.09% to 67.58% (M = 38.44%, SD = 12.53).
Measures and Procedures
Each educator participated in a semi-structured interview conducted in Spring 2013 as part of a pilot study (N = 14) or in Spring 2016 as part of a larger study (N = 40). In the pilot study, a donation of $500 was made to each of the participating school districts, while in the larger study, each participant received a $30 Amazon gift card as an individual participation incentive. Because the same interview protocol was used in both the 2013 and 2016 data collection, and because we did not observe substantive differences in the pattern of codes discussed below in the results, we present our findings for the pooled sample.
The semi-structured interview protocol used for all interviews included a broad set of questions related to educators’ experiences searching for information about and deciding to use social skills programs and practices. In this analysis, we focus on educators’ responses to a series of questions designed to assess perceptions of research, which represents approximately one-third of the full interview. These questions and their follow-up probes are listed in Appendix A. The interviews were recorded after securing the participant’s consent, and the recordings were transcribed verbatim by a team of undergraduate and graduate students.
Data Analysis Plan
In this study, we aimed to explore the extent to which educators discuss and prioritize Rogers’ (1995) five attributes of innovations in the context of research use, and to investigate differences across educators in executive and non-executive roles. Prior theory and research have identified relative advantage, compatibility, complexity, observability, and trialability as attributes that are important for adopting innovations (Rogers, 1995). However, less is known about how educators prioritize these attributes when deciding to use research. Therefore, rather than examining our qualitative data through an entirely exploratory open-coding process, we selected directed content analysis, which involves identifying and operationalizing a priori coding categories drawn from existing theory (here, diffusion of innovations theory; Rogers, 1995), and which is useful when the goal of research is to extend or provide support for an existing theory (Hsieh & Shannon, 2005).
We used diffusion of innovations theory and the existing literature on educators’ perceptions of research to develop a codebook with operational definitions for each of Rogers’ (1995) five attributes of an innovation (see Table 1). To guide coders, the full codebook also included guidance on the essential features of each code, key words and phrases that might signal a code, and example quotes. To facilitate applying these codes to interview transcripts in a standardized way that would permit testing of coder reliability, each interview was partitioned into roughly 100-word segments, which served as the coding unit for the study. The choice of 100-word segments is somewhat arbitrary, but emerged from experimentation in preliminary coding during codebook development by the first and second authors, and represents a balance between shorter segments that provide higher resolution and longer segments that provide more context. The interview coding process involved identifying, in each segment, mentions of any of Rogers’ (1995) attributes of an innovation.
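To make the segmentation step concrete, a minimal sketch in Python follows. It is purely illustrative: the function name and its interface are our assumptions based on the description above, not the study’s actual tooling.

```python
# Illustrative sketch: partition an interview transcript into roughly
# 100-word segments, the coding unit described above. Hypothetical code,
# not the study's actual tooling.

def segment_transcript(transcript: str, words_per_segment: int = 100) -> list[str]:
    """Split a transcript into consecutive segments of about equal word counts."""
    words = transcript.split()
    return [
        " ".join(words[i:i + words_per_segment])
        for i in range(0, len(words), words_per_segment)
    ]

# Example: a 250-word transcript yields segments of 100, 100, and 50 words.
segments = segment_transcript("word " * 250)
print([len(s.split()) for s in segments])  # [100, 100, 50]
```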
The first and second authors independently coded and then came to consensus to create gold standard codes for three interviews. These gold standard codes were then used to train three coders until they achieved reliability. Following training, two coders independently coded each of the remaining interviews. After a first round of coding, these two coders met to discuss any discrepancies, then recoded the interviews until they achieved reliability. During both the training stage and the regular coding stage, we measured coder reliability as the percent agreement between the two coders (or with the gold standard) on the number of times a specific attribute was mentioned in a specific interview. For example, if Coder A identified 10 mentions of observability by interviewee #1 and Coder B identified 9 mentions, we computed the reliability of observability in Interview #1 as 9/10, or 90%. The coding of an interview-code pair was deemed reliable when the two coders achieved at least 80% agreement.2 For the purposes of analysis, after achieving reliability, the two coders’ counts of the frequency of a specific attribute in a specific interview were averaged. In the above example, this would appear as 9.5 mentions of observability by interviewee #1.
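As a concrete illustration of this reliability rule, the following sketch reproduces the arithmetic of the worked example above (10 vs. 9 mentions yields 90% agreement and a consensus count of 9.5); the function names are ours and the code is not the study’s actual analysis pipeline.

```python
# Illustrative sketch of the interview-level reliability rule described above:
# percent agreement = smaller count / larger count for a given attribute in a
# given interview; once a coder pair is reliable, their two counts are averaged.

def percent_agreement(count_a: float, count_b: float) -> float:
    """Agreement between two coders' mention counts for one attribute in one interview."""
    if count_a == count_b:
        return 1.0  # includes the case where both coders found zero mentions
    return min(count_a, count_b) / max(count_a, count_b)

def consensus_count(count_a: float, count_b: float, threshold: float = 0.80):
    """Average the two counts if the pair meets the 80% agreement threshold."""
    if percent_agreement(count_a, count_b) >= threshold:
        return (count_a + count_b) / 2
    return None  # below threshold: coders discuss discrepancies and recode

# The worked example from the text: 10 vs. 9 mentions of observability.
print(percent_agreement(10, 9))  # 0.9, i.e., 90% agreement
print(consensus_count(10, 9))    # 9.5 mentions used in analysis
```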
Our final coded data reflect the number of times that each of Rogers’ (1995) five attributes of innovations appeared in educators’ interviews. Each attribute appeared in at least half of the interviews, and each educator mentioned at least one of these attributes. To analyze these data, we employed a rank order comparison technique to explore the extent to which educators prioritized each of these attributes of innovations in their discussions of research use (Curtis, Wenrich, Carline, Shannon, Ambrozy, & Ramsey, 2001; Humble, 2009). To supplement the rank order comparison and to ensure that we had appropriately considered the content and meaning of our interviewees’ discussions, we also carefully examined our coded qualitative data and selected exemplar quotes to add depth to our analysis. To determine whether educators in executive roles (e.g., superintendents, principals) differ from educators in non-executive roles (e.g., teachers, staff), we repeated this analysis separately for educators in these two subgroups. To summarize the stages of our data analysis plan, we began by conducting a directed content analysis of semi-structured qualitative interview transcripts to code for instances of relative advantage, compatibility, complexity, observability, and trialability. Next, we generated counts of code frequencies that were analyzed using rank order comparisons. Finally, we returned to the original interview transcripts to provide depth and context to the results of these rank order comparisons.
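The rank order comparison itself reduces to simple descriptive arithmetic, sketched below; the attribute names follow Table 1 and the role split mirrors our executive/non-executive comparison, but the counts are invented for illustration and are not our data.

```python
# Illustrative sketch of the rank order comparison: average each attribute's
# mention count across interviews, then rank attributes by that average.
# The counts below are invented for illustration; they are not the study data.
from statistics import mean

ATTRIBUTES = ["compatibility", "observability", "complexity",
              "relative advantage", "trialability"]

# Each interview: (role, {attribute: averaged mention count from the two coders})
interviews = [
    ("executive",     {"compatibility": 9.5, "observability": 7.0, "complexity": 4.5,
                       "relative advantage": 2.0, "trialability": 1.5}),
    ("non-executive", {"compatibility": 8.0, "observability": 4.5, "complexity": 3.0,
                       "relative advantage": 1.0, "trialability": 1.0}),
]

def rank_order(counts_by_interview):
    """Return attributes sorted by mean mention frequency, highest first."""
    averages = {a: mean(c[a] for c in counts_by_interview) for a in ATTRIBUTES}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Full sample, then each role subgroup (as in Table 3).
for label, subset in [("full sample", interviews),
                      ("executives", [i for i in interviews if i[0] == "executive"]),
                      ("non-executives", [i for i in interviews if i[0] == "non-executive"])]:
    counts = [c for _, c in subset]
    print(label, [(a, round(avg, 2)) for a, avg in rank_order(counts)])
```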
Results
Rank Order Comparison of Rogers’ (1995) Attributes of Innovations
Table 3 provides the average number of times that each of Rogers’ (1995) five attributes of innovations appeared in educators’ interviews, for the full sample and separately for executives and non-executives.
Table 3.
Rank Order of Average Frequencies of Codes Representing Each of Rogers’ (1995) Five Attributes of Innovations for Whole Sample (N = 54), Executives (N = 24), and Non-Executives (N = 30)
| Rank Order of Attributes | Percent of Interviews Mentioning Attribute | Full Sample M (SD) | Executives M (SD) | Non-Executives M (SD) |
|---|---|---|---|---|
| 1. Compatibility | 98.1% | 9.04 (5.35) | 9.71 (5.41) | 8.50 (5.33) |
| 2. Observability | 92.6% | 5.83 (4.40) | 7.44 (4.64) | 4.55 (3.80) |
| 3. Complexity | 87.0% | 3.82 (2.73) | 4.58 (2.65) | 3.22 (2.67) |
| 4. Relative Advantage | 68.5% | 1.52 (1.56) | 1.75 (1.45) | 1.33 (1.65) |
| 5. Trialability | 51.9% | 1.26 (1.97) | 1.60 (2.16) | 0.98 (1.79) |
Compatibility
When discussing research use, educators in our sample most frequently mentioned its compatibility with goals, needs, values, or personal experiences. On average, educators mentioned compatibility 9.04 times (SD = 5.35). Compatibility was commonly invoked as a facilitator of research use. Educators often mentioned that it was easier to use research when it was conducted in schools or districts with similar student populations and/or resources or when findings fit with their own personal experiences. For example, when asked what makes research useful, educators in different counties gave similar responses. A River County district director noted that:
Educator: For me, it was the schools that were participating were really strong fits to the schools here in <School District>. So it was a nice- it was a same kind of demographic match. Not in all cases of course, but in the majority of the schools. The other thing that really made that piece of research powerful for me is that I guess it- it resonated internally with my own experience as a teacher and administrator. (River County District Director)
Additionally, a Lake County principal stated that:
Educator: Well…when I look at research, I like to see if the demographics is comparable to my demographics, so I can compare it because every community is different. So, I want to see if there’s some correlation between communities (Lake County Principal).
A lack of compatibility was also invoked as a barrier to research use. For instance, educators expressed skepticism toward researchers who had little experience working in schools:
A few years ago, there was something called the <Name of Principal Meeting> that I was involved in because we were a high priority school and I was the principal at one of our high schools. And I would go to these all weekend workshops, and would hear you know research says, research says, research says, research says. And, you know, I finally asked the people who were part of all of this, I said “Okay, so you’re telling me that you’re steeped in this research.” You know, whatever. “Have any of you ever been in an urban high school?” And, they’re all like “Well, no”. “Have you ever been to any of the high schools in <School District>?” “No.” Then how can you even say that this research is appropriate if you’ve never been in the setting to even try the research out yourself? (another River County District Director)
The high frequency of compatibility codes suggests that educators regularly weigh whether research fits with their own perspectives and values, as well as with their districts, schools, and communities. In turn, these perceptions of fit drive educators’ decisions to use or forgo research.
Observability
Observability was the second most frequently mentioned attribute in educators’ discussions of research use. On average, educators mentioned observability 5.83 times (SD = 4.40). Some mentions of observability focused on educators’ access to research. Specifically, educators sometimes expressed concerns that research is not always easily available to them:
Interviewer: So are there other aspects that make research useful for making decisions about school programming?
Educator: I’m just thinking about the availability of it. I mean…how do you get research into our hands so that we can make good use of it? (River County Principal)
Other educators focused on the observability of the source of research, noting that certain sources are more reputable than others. They used reputation as a means of judging the credibility and quality of research. For instance, in River County, a district director noted that:
Okay, you know there’s certainly a lot of credibility if something comes out of Michigan State University just because I love Michigan State, but Harvard, Yale…I mean places with good reputations mean more. You know, if you told me the Huffington Post did research on something, I would say “Really?”. (River County District Director)
Similarly, a school staff member in Lake County gave a comparable response when asked about judging quality:
Educator: Well, I put a lot of faith in whoever the organization is so some of that stands on reputation with…If it’s something you’ve just never heard of you kind of have to think: Well does this really fit? Is there other stuff out there? (Lake County District School Staff Member)
Additionally, educators described the importance of observing or hearing about research use or the use of evidence-based programs in other schools or districts, especially those similar to their own. Oftentimes, they used these observations to judge whether they should use a particular piece of research or evidence-based program:
For me, research is again seeing the program in action at a school, being able to go to the school district’s websites to find out if the program worked or if it didn’t. In other words, hearing the success stories. (Lake County Principal)
Complexity
Complexity, or the degree to which research or evidence-based programs are easy or difficult to use, was the third most frequently mentioned attribute in educators’ discussions of research use. On average, educators mentioned complexity 3.82 times (SD = 2.73). Educators sometimes expressed difficulties discerning what counts as research:
When you look something up or you Google a topic, I’m not sure that I or our teachers even know what’s valid research and what’s not. (River County Principal)
When describing pieces of research that were useful in decision-making, educators often discussed using simple, digestible language and visual formats. For example, a Lake County principal noted that:
I think sometimes with research, you get so heavy with the terminologies and academic style language that you don’t make it easy. (Lake County Principal)
Additionally, as another Lake County principal explained:
So, you know, most- we’ll say most- I’ll give you probably about 75% of my staff are visual learners. I can give them numbers and numbers and numbers out of the wazoo, but as soon as I make it colorful, a pie chart, a bar graph…now they can seem to understand. (another Lake County Principal)
Relative Advantage
Discussions of relative advantage were somewhat rare in our data. On average, educators mentioned relative advantage just 1.52 times (SD = 1.56). Educators who mentioned relative advantage often highlighted the benefits of research over alternatives like anecdotes, noting that research use is better for improving student outcomes and preserving limited resources:
Well, I’m someone who values anecdotal experiences. I really am. Um…but I think for this…for the purposes of um what we’ve been asked to do in terms of where we’re putting money and where we’re putting time and where we’re pulling kids from other learning opportunities and saying we’re gonna do this, it has to be that much more rigorous research base that we’re calling from (River County School Staff).
However, educators also often highlighted the benefits of personal and staff experience, especially for understanding how a program works in their own district or for building trust. For example, one educator described how staff experience can hold advantages over research from reputable, observable sources:
When you can say this person works at Michigan State University or University of Michigan and they’ve found this to be a valid and reliable program, that goes farther. However, I can also go down the hall and talk to five different teachers about practices that are evidence-based that they see working and it may not be a formalized product or program and to me there’s a lot of value in that and that goes back to teachers trusting each other. (River County Principal)
Trialability
Educators’ discussions of trialability were even rarer than their discussions of relative advantage. On average, educators mentioned trialability just 1.26 times (SD = 1.97). While some educators focused on experimenting with a particular evidence-based program or with data collection in their district, it was more common for educators to mention the importance of exposure to research in college. For example, when responding to questions about how others in their district feel about research, a Lake County principal said:
Those who are recently taking classes and had to really get involved with their coursework probably receive it a lot easier than someone who has been outta school for a little bit. (Lake County Principal)
A River County principal gave a similar response:
I’ve been in the system long enough to know that when people wanna challenge ideas they want something- they want you to provide some evidence that it works. But for the most part, I think teachers- those who are coming outta college, who are fresh out of the universities - They have a different perspective about the role of research (River County Principal)
Comparing Differences Across Role
Because educators are not a uniform group, they may exhibit distinct patterns in the extent to which they mention each of Rogers’ five attributes of innovations. When repeating this rank order analysis separately for those in executive and non-executive roles (last two columns in Table 3), we observe three noteworthy patterns. First, the rank order of the codes within each of these two subgroups was stable and mirrored results from the whole sample. Second, the frequency of compatibility codes was high in both executives’ (M = 9.71, SD = 5.41) and non-executives’ (M = 8.50, SD = 5.33) discussions of research use. Finally, executives mentioned each attribute more often than the full sample average, while non-executives mentioned each attribute less often than the full sample average. Of note, executives mentioned observability (M = 7.44, SD = 4.64) more than 50% more often than non-executives (M = 4.55, SD = 3.80).
Discussion
Public schools are promising venues for delivering evidence-based mental health services and psychosocial interventions to children and adolescents (e.g., Atkins et al., 2010; Cappella et al., 2008; Rones & Hoagwood, 2000). However, there is still a research-practice gap in decision-making about mental health services and psychosocial interventions in schools (e.g., Hoagwood & Johnson, 2003; Kratochwill & Stoiber, 2000; Ringeisen et al., 2003). This study adds to a developing body of literature on educators’ perceptions of research by conceptualizing research use as an innovation in educators’ daily practice and examining the extent to which each of Rogers’ (1995) five attributes of innovations were invoked in educators’ discussions of research. By exploring how often educators mention all five attributes in their discussions of research use, we are able to highlight key features needed to narrow the research-practice gap in educational settings.
The results of our study suggest that educators consider Rogers’ (1995) five attributes of innovations in their discussions of research use. Each of these attributes was mentioned by a majority of educators in their interviews. Additionally, all educators mentioned at least one of these attributes during the course of their interviews. Although we observed mentions of all five of Rogers’ (1995) attributes in educators’ discussions of research use, our findings point to the primacy of compatibility. Across roles (i.e., executive or non-executive), educators asked the question “Does it fit?” when considering research use or evidence-based programs. Educators described using research when it resonated with their own personal experiences or when it fit their context (e.g., school, district, community). In contrast, they expressed skepticism about research that ran counter to their personal experiences or that was developed without consideration of their specific context. Based on this finding, we hypothesize that research and evidence-based programs that prioritize compatibility with school and district contexts will be more likely to be adopted and used by educators. However, as many have noted, the development and testing of mental health services and psychosocial interventions have often proceeded with limited consideration of compatibility (Atkins et al., 2010; Hoagwood & Johnson, 2003; Ringeisen et al., 2003). Participatory approaches, such as research-practice partnerships (e.g., Coburn et al., 2013), that involve educators in the development and testing stages may prove useful for improving compatibility. Additionally, clearly communicating when, where, and for whom an intervention works may help educators better assess compatibility (Gottfredson et al., 2015).
Observability and complexity were also common in educators’ discussions of research use. Consistent with past research, educators described access to research (e.g., Behrstock-Sherratt et al., 2011), the reputation of the source of research (e.g., Coburn & Talbert, 2006), and opportunities to observe research and evidence-based programs in other districts and schools (e.g., Boardman et al., 2005) as key facilitators of use. Therefore, we hypothesize that research and evidence-based programs that are more visible and accessible or that seek to maximize ease of use will be more likely to be adopted and used by educators. Efforts to boost observability might focus on connecting educators to key individuals or organizations that can broker information about research and evidence-based programs (e.g., Neal et al., 2015a; Neal, Neal, Lawlor, & Mills, 2015b) or providing opportunities for educators in different districts to connect around their experiences using evidence-based programs. Furthermore, educators described difficulties identifying what counts as research and, similar to past research (e.g., Bartels, 2003; Barwick et al., 2014), expressed a keen interest in syntheses of research that are visual in nature and easy to digest. These findings suggest a need for researchers to strategically tailor research findings in user-friendly formats that are sensitive to educators’ needs and skillsets (e.g., Wandersman et al., 2008).
Our results revealed some key differences in the degree to which educators in distinct roles prioritized each of Rogers’ (1995) five attributes of innovations in discussions of research use. Specifically, in discussing research use, executives (e.g., superintendents and principals) mentioned each attribute more often than non-executives (e.g., teachers and district staff). In particular, executives mentioned observability more than 50% more often than non-executives. These differences align, in part, with the unique responsibilities assumed by educators in each of these distinct roles. Because executives are typically responsible for decision-making in their district or school building, they may be more likely than non-executives to think about each of Rogers’ (1995) attributes when considering research use. In this context, observability may often take the form of executives asking themselves: “Can I see that it works?” Considerations of observability, in particular, could help executives ensure that their use of research or evidence-based programs will lead to positive outcomes and will not end up wasting limited time or resources. Overall, future research is needed to determine whether differing responsibilities or alternative factors explain differences by educator role in the prioritization of attributes. However, based on these results, we hypothesize that consideration of observability when tailoring research or evidence-based programs may be especially important for encouraging executives’ research use.
Although our findings suggest that the attributes described in Rogers’ (1995) original theory are a good fit to the data, it is possible that additional attributes are relevant but missing. For example, Wisdom et al. (2014) describe the riskiness and cost efficacy of an innovation as factors that influence adoption. Moreover, while we have focused on innovation attributes because they are likely to be malleable features that can be altered, Rogers’ (1995) diffusion of innovations theory and additional research also highlight the importance of contextual features such as communication channels (e.g., Finnigan et al., 2013; Neal et al., 2015a; Neal et al., 2015b) and other external political and organizational factors in influencing adoption (Neal, Neal, Mills, & Lawlor, in press; Wisdom et al., 2014). Future research could expand our work to consider the role that these additional factors play in educators’ perceptions and use of research evidence. In this study, we focused on a small and geographically bounded sample of educators. Future research is needed to determine whether our findings generalize to educators in other regions and whether there are demographic differences in the extent to which educators prioritize each of Rogers’ (1995) five attributes of innovations in their considerations of research use. Finally, it is important to develop and test strategies that target Rogers’ (1995) attributes of innovations. For educators, these strategies might include building increased opportunities for involvement in the research process to boost compatibility and observability, and training to improve the interpretation of research findings to decrease complexity. For researchers, these strategies might include better training in participatory approaches to boost compatibility and the development of dissemination and design tools to increase observability and decrease complexity. These efforts hold promise not only for improving our understanding of the research-practice gap in education, but also for narrowing this gap in the future.
Appendix A. Interview Protocol
The full interview began with a series of questions asking the educator to describe a recent social skills program or practice that had been adopted or considered for adoption. These questions focused on how they (or their district) learned about the program, how they evaluated it for adoption, and who participated in the process. The questions in this section did not directly ask about “research” or “evidence,” and the interviewer was instructed not to raise these issues.
The second part of the interview, which is the focus of this study, included the following questions about research:
Next, we’d like to talk about your thoughts on research about school programs, including what role research does play and what role you think it should play in decisions about school programming.

- When you hear the word “research” in the context of school-based programs, what kinds of things do you think of?
  - PROBE: What doesn’t count as “research” for you?
- What, if anything, makes research useful for making decisions about school programs?
- Can you tell me about a piece of research that was especially helpful to you in the past year?
  - PROBES (only if yes):
    - What was it?
    - How did you find it?
    - How did you know if it was high quality?
    - How was it useful to you?
    - What made it useful? (quality, format, topic)
    - Did others also find it useful or important?
- How do you think others in your district feel about research?
  - PROBE: Who are you including? (administrators, teachers, parents, etc.)
The interview concluded with basic demographic questions about work history, education, and racial, ethnic, and gender identity.
Footnotes
County names are pseudonyms to protect the confidentiality of study participants.
Because this method of calculating reliability could potentially result in high scores even if coders identified different segments for each code in an interview, we also calculated coder reliability by segment-code pairs. Over all of the segment-code pairs in our study, our coders demonstrated high inter-rater reliability (κ = .87). This suggests that coders were consistently coding attributes in the same segments.
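To make the segment-level reliability check concrete, the sketch below shows one way a kappa over segment-code pairs could be computed. This is a minimal illustration, not the authors’ analysis code; the data structures (per-coder sets of segment-code pairs) and the use of scikit-learn’s cohen_kappa_score are assumptions made for demonstration.
```python
# Illustrative sketch (not the study's analysis code): Cohen's kappa computed
# over segment-code pairs. Assumes two coders reviewed the same interview
# segments and recorded which attribute codes (if any) applied.
from sklearn.metrics import cohen_kappa_score

# Hypothetical example data.
segments = ["seg01", "seg02", "seg03", "seg04"]
codes = ["relative_advantage", "compatibility", "complexity",
         "observability", "trialability"]

coder_a = {("seg01", "compatibility"), ("seg02", "complexity"),
           ("seg04", "observability")}
coder_b = {("seg01", "compatibility"), ("seg02", "complexity"),
           ("seg03", "observability")}

# Expand each coder's decisions into one binary rating per segment-code pair,
# so disagreement about *which* segment a code applies to lowers agreement.
pairs = [(s, c) for s in segments for c in codes]
ratings_a = [int(p in coder_a) for p in pairs]
ratings_b = [int(p in coder_b) for p in pairs]

kappa = cohen_kappa_score(ratings_a, ratings_b)
print(f"Cohen's kappa over segment-code pairs: {kappa:.2f}")
```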
Compliance with Ethical Standards: This study was funded by an Officer’s Research Award (#182241) and a Use of Research Evidence Award (#183010) from the William T. Grant Foundation. Additional support came from an R21 research grant from the National Institute of Mental Health (#1R21MH100238-01A1). All authors of this manuscript (i.e., Jennifer Watling Neal, Zachary P. Neal, Jennifer A. Lawlor, Kristen J. Mills, and Kathryn McAlindon) declare that they have no conflicts of interest. This research was approved by Michigan State University’s IRB (#x12-1011e, #x14-706e, #x14-1173e). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
References
- Atkins MS, Hoagwood KE, Kutash K, Seidman E. Toward the integration of education and mental health in schools. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37:40–47. doi: 10.1007/s10488-010-0299-7.
- Atkins MS, Lakind D. Usual care for clinicians, unusual care for their clients: Rearranging priorities for children’s mental health services. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40:48–51. doi: 10.1007/s10488-012-0453-5.
- Bartels N. How teachers and researchers read academic articles. Teaching and Teacher Education. 2003;19:737–753.
- Barwick MA, Barac R, Akrong LM, Johnson S, Chaban P. Bringing evidence to the classroom: Exploring educator notions of evidence and preferences for practice change. International Education Research. 2014;2(4):1–15.
- Behrstock-Sherratt E, Drill K, Miller S. Is the supply in demand? Exploring how, when and why teachers use research (revised ed.). Washington, DC: American Institutes for Research; 2011.
- Boardman AG, Argüelles ME, Vaughn S, Hughes MT, Klingner J. Special education teachers’ views of research-based practices. The Journal of Special Education. 2005;39:168–180.
- Cappella E, Frazier SL, Atkins MS, Schoenwald SK, Glisson C. Enhancing schools’ capacity to support children in poverty: An ecological model of school-based mental health services. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35:395–409. doi: 10.1007/s10488-008-0182-y.
- Carnine D. Bridging the research-to-practice gap. Exceptional Children. 1997;63(4):513–521.
- Coburn CE, Honig MI, Stein MK. What is the evidence on districts’ use of evidence? In: Bransford JD, Stipek DJ, Vye NJ, Gomez LM, Lam D, editors. The Role of Research in Educational Improvement. Harvard Education Press; 2009. pp. 67–86.
- Coburn CE, Penuel W, Geil KE. Research-practice partnerships: A strategy for leveraging research for educational improvement in school districts. New York: William T. Grant Foundation; 2013.
- Coburn CE, Talbert JE. Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education. 2006;112:469–495.
- Cook BG, Smith GJ, Tankersley M. Evidence-based practice in education. In: Harris KR, Graham S, Urdan T, editors. APA Educational Psychology Handbook: Vol. 1. Theories, Constructs, and Critical Issues. Washington, DC: APA; 2012. pp. 493–525.
- Corcoran T, Fuhrman SH, Belcher CL. The district role in instructional improvement. Phi Delta Kappan. 2001;83:78–84.
- Cousins JB, Leithwood KA. Enhancing knowledge utilization as a strategy for school improvement. Knowledge: Creation, Diffusion, Utilization. 1993;14:305–333.
- Cousins JB, Walker CA. Predictors of educators’ valuing of systematic inquiry in schools. Canadian Journal of Program Evaluation. 2000;Special Issue:25–52.
- Curtis JR, Wenrich MD, Carline JD, Shannon SE, Ambrozy DM, Ramsey PG. Understanding physicians’ skills at providing end-of-life care: Perspectives of patients, families, and health care workers. Journal of General Internal Medicine. 2001;16:41–49. doi: 10.1111/j.1525-1497.2001.00333.x.
- Dagenais C, Lysenko L, Abrami PC, Bernard RM, Ramde J, Janosz M. Use of research-based information by school practitioners and determinants of use: A review of empirical research. Evidence & Policy. 2012;8:285–309. doi: 10.1332/174426412X654031.
- Everton T, Galton M, Pell T. Teachers’ perspectives on educational research: Knowledge and context. Journal of Education for Teaching. 2000;26:167–182.
- Every Student Succeeds Act (ESSA) of 2015, 20 U.S.C.A. § 6301 et seq. U.S. Government Publishing Office; 2015.
- Farley-Ripple EN. Research use in school district central office decision making: A case study. Educational Management, Administration, and Leadership. 2012;40:786–806. doi: 10.1177/1741143212456912.
- Finnigan KS, Daly AJ, Che J. Systemwide reform in districts under pressure: The role of social networks in defining, acquiring, using, and diffusing research evidence. Journal of Educational Administration. 2013;51:476–497. doi: 10.1108/09578231311325668.
- Gottfredson DC, Cook TD, Gardner FEM, Gorman-Smith D, Howe GW, Sandler IN, Zafft KM. Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science. Prevention Science. 2015;16:893–926. doi: 10.1007/s11121-015-0555-x.
- Greenberg MT, Weissberg RP, O’Brien MU, Zins JE, Fredericks L, Resnik H, Elias MJ. Enhancing school-based prevention and youth development through coordinated social, emotional, and academic learning. American Psychologist. 2003;58:466–474. doi: 10.1037/0003-066X.58.6-7.466.
- Hallfors D, Godette D. Will the “Principles of Effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research. 2002;17:461–470. doi: 10.1093/her/17.4.461.
- Hoagwood K, Johnson J. School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology. 2003;41:3–21.
- Hofferth SL. Changes in American children’s time. Electronic International Journal of Time Use Research. 2009;6(1):26–47. doi: 10.13085/eijtur.6.1.26-47.
- Hofferth SL, Sandberg JF. How American children spend their time. Journal of Marriage and Family. 2001;63:295–308.
- Honig MI, Coburn C. Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy. 2008;22:578–608. doi: 10.1177/0895904807307067.
- Hsieh H, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15:1277–1288. doi: 10.1177/1049732305276687.
- Hultman G, Hörberg CR. Knowledge competition and personal ambition: A theoretical framework for knowledge utilization and action in context. Science Communication. 1998;19:328–348.
- Humble AM. Technique triangulation for validation in directed content analysis. International Journal of Qualitative Methods. 2009;8:34–51.
- Kena G, Hussar W, McFarland J, de Brey C, Musu-Gillette L, Wang X, Zhang J, Rathbun A, Wilkinson-Flicker S, Diliberti M, Barmer A, Bullock Mann F, Dunlop Velez E. The Condition of Education 2016 (NCES 2016-144). Washington, DC: U.S. Department of Education, National Center for Education Statistics; 2016. Retrieved October 11, 2016 from http://nces.ed.gov/pubsearch.
- Kennedy MM. The connection between research and practice. Educational Researcher. 1997;26:4–12.
- Kratochwill TR, Stoiber KC. Diversifying theory and science: Expanding the boundaries of empirically supported interventions in school psychology. Journal of School Psychology. 2000;38:349–358.
- Latham G. Do educators use the literature of the profession? National Association of Secondary School Principals Bulletin. 1993;77:63–67.
- Long ACJ, Sanetti LMH, Collier-Meek MA, Gallucci J, Altschaefl M, Kratochwill TR. An exploratory investigation of teachers’ intervention planning and perceived implementation barriers. Journal of School Psychology. 2016;55:1–26. doi: 10.1016/j.jsp.2015.12.002.
- Miretsky D. A view of research from practice: Voices of teachers. Theory into Practice. 2007;46:272–280. doi: 10.1080/00405840701593857.
- Neal JW, Neal ZP, Kornbluh M, Mills KJ, Lawlor JA. Brokering the research-practice gap: A typology. American Journal of Community Psychology. 2015a;56(3/4):422–435. doi: 10.1007/s10464-015-9745-8.
- Neal ZP, Neal JW, Lawlor JA, Mills KJ. Small worlds or worlds apart? Using network theory to understand the research-practice gap. Psychosocial Intervention. 2015b;24:177–184. doi: 10.1016/j.psi.2015.07.006.
- Neal ZP, Neal JW, Mills KJ, Lawlor JA. Making or buying evidence: Using transaction cost economics to understand decision making in public schools. Evidence & Policy. In press. doi: 10.1332/174426416X14778277473701.
- No Child Left Behind (NCLB) Act of 2001, 20 U.S.C.A. § 6301 et seq. West; 2003.
- Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2015;42:533–544. doi: 10.1007/s10488-013-0528-y.
- Reinke WM, Stormont M, Herman KC, Puri R, Goel N. Supporting children’s mental health in schools: Teacher perceptions of needs, roles, and barriers. School Psychology Quarterly. 2011;26:1–13. doi: 10.1037/a0022714.
- Ringeisen H, Henderson K, Hoagwood K. Context matters: Schools and the “research to practice gap” in children’s mental health. School Psychology Review. 2003;32:153–168.
- Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.
- Rones M, Hoagwood K. School-based mental health services: A research review. Clinical Child and Family Psychology Review. 2000;3:223–241. doi: 10.1023/a:1026425104386.
- Tseng V. The use of research in policy and practice. Society for Research in Child Development Social Policy Report. 2012;26:1–23.
- U.S. Census Bureau. 2010–2014 American Community Survey 5-year estimates. 2014. Retrieved from http://factfinder.census.gov.
- U.S. Census Bureau Population Division. Annual estimates of the resident population: April 1, 2010 to July 1, 2015. 2016. Retrieved from http://factfinder.census.gov.
- U.S. Department of Education. Safe and Drug-Free Schools Program: Notice of final principles of effectiveness. Federal Register. 1998;63:29902–29906.
- Walker HM. Commentary: Use of evidence-based interventions in schools: Where we’ve been, where we are, and where we need to go. School Psychology Review. 2004;33:398–407.
- Weiss CH. Have we learned anything new about the use of evaluation? American Journal of Evaluation. 1998;19:21–33.
- Weiss CH, Bucuvalas MJ. Social science research and decision-making. New York: Columbia University Press; 1980.
- Williams D, Coles L. Teachers’ approaches to finding and using research evidence: An information literacy perspective. Educational Research. 2007;49:185–206.
- Wisdom JP, Chor KHB, Hoagwood KE, Horwitz SM. Innovation adoption: A review of theories and constructs. Administration and Policy in Mental Health and Mental Health Services Research. 2014;41:480–502. doi: 10.1007/s10488-013-0486-4.
