Abstract
This study draws on two-communities theory to address two major research questions related to conceptions of research in educational practice and policy. First, how do educators conceptualize research? Second, to what extent do educators’ conceptions of research align with recent U.S. federal educational policies? We conducted 90 semi-structured interviews with educators in the United States, asking them what comes to mind when they think of research. We used open, axial, and selective coding to characterize educators’ conceptions of research. We also compared educators’ conceptions of research to two U.S. federal educational policies that define scientifically based research and evidence-based interventions. Findings indicate that educators and policies defined research in similar ways, but each included some unique characteristics. Implications of the study include the need for increased communication between federal policymakers and educators and improved reporting by researchers to better attend to the needs of educators and policymakers.
Keywords: definitions of research, research evidence, educators, educational policy
INTRODUCTION
Some education researchers are producing research informed by practice and some practitioners are using practices informed by research. However, there remains a research-practice gap (e.g., Farley-Ripple, May, Karpyn, Tilley, & McDonough, 2018; Neal, Neal, Kornbluh, Mills, & Lawlor, 2015). This gap is marked by a communication deficit between researchers and practitioners, where there is limited uptake of practitioners’ needs and expertise in research settings, and limited uptake of educational research in practice settings (Neal et al., 2015). Several factors contribute to this gap, including the limited accessibility of research, limited relevance to student needs, and lack of resources (Bartels, 2003; Barwick, Barac, Akrong, Johnson, & Chaban, 2014; Coburn & Talbert, 2006; Farley-Ripple, 2012; Farley-Ripple et al., 2018; Long et al., 2016; Malin, Brown, & Trubceac, 2018; Neal et al., 2015; Neal, Neal, Lawlor, Mills, & McAlindon, 2018; Neal, Mills, McAlindon, Neal, & Lawlor, in press).
In the United States, federal policies including the No Child Left Behind Act of 2001 (NCLB) and the Every Student Succeeds Act (ESSA) encourage educators to use research or evidence to aid in improving student outcomes and support equal educational opportunities for all students (ESSA, 2015; NCLB, 2002). However, in addition to the research-practice gap noted above, there may also be a practice-policy gap wherein practitioners do not engage in the research use encouraged or required by policy. One potential explanation for such a gap is that the policy and practitioner communities are speaking different languages: what practitioners mean by “research” is different from what policies mean by “research” (e.g., Hill, 2001; Spillane, Reiser, & Reimer, 2002). Accordingly, work is needed to understand how educators conceptualize research and how these conceptions align with policies requiring its use (see Davidson, Penuel, & Farrell, 2018; Finnigan, Daly, & Che, 2013; Joram, 2007; Penuel, Farrell, Allen, Toyama, & Coburn, 2018). Understanding how educators conceptualize research provides insight about how educators are locating, evaluating, and applying research to advance educational efforts, and may identify opportunities for additional training. Moreover, exploring how educators’ conceptions of research align with federal educational policies may identify potential mismatches between the aspects of research that are important to educators and those emphasized in policy.
This study explores how educators conceptualize research, and the extent to which their conceptions of research align with recent U.S. federal educational policies. Drawing on two-communities theory (Caplan, 1979; Farley-Ripple et al., 2018), we argue that educators and policymakers represent separate groups with their own conceptions of research, which would lead to a practice-policy gap. Through semi-structured interviews with 90 K-12 public school educators throughout the U.S. state of Michigan, we show that educators’ conceptions of research are quite broad, and only partially align with federal policy definitions. We conclude with a discussion of the findings on educators’ conceptions of research, and their implications for federal education policymakers and researchers.
LITERATURE REVIEW
Two-Communities Theory and the Practice-Policy Gap
Originating in the knowledge utilization literature, two-communities theory provides a rationale for exploring educators’ conceptions of research, and whether these conceptions align with U.S. federal education policies (Caplan, 1979; Farley-Ripple et al., 2018). Caplan (1979) aimed to describe why government policymakers often fail to use social science research. He argued that the users and producers of research often work in disconnected social communities, where their limited boundary crossing (e.g., Penuel, Allen, Coburn, & Farrell, 2015) leads to limited overlap in definitions and expectations (see also Green, Ottoson, Garcia, & Hiatt, 2009; Lomas, 2007; Neal et al., 2015; Nutley, Walter, & Davies, 2003). Although recent work suggests there may be more communication between the users and producers of research than two-communities theory originally implied (e.g., Newman, Cherney, & Head, 2016), two-communities theory continues to provide a useful explanation for mismatches between these groups. For example, Farley-Ripple et al. (2018) used two-communities theory to show that educators tend to emphasize the demographic fit of research to their own context, while researchers tend to emphasize internal validity and research design.
While two-communities theory has been applied to explain the research-practice gap (e.g., Caplan, 1979; Farley-Ripple et al., 2018; Newman et al., 2016), it is also helpful for understanding a practice-policy gap between the users of educational policy (e.g., educators) and those who enact it (e.g., federal policymakers). Like educators and researchers, educators and policymakers work in different communities with different definitions and expectations (Locock & Boaz, 2004). Supporting this idea, a number of studies of educators’ sense-making suggest that educators interpret the language and messages in educational policy based on their own experiences and values, and in ways that are different from policymakers’ original objectives (e.g., Hill, 2001; Spillane et al., 2002). Therefore, it is possible that educators and federal policymakers have distinct conceptions of research.
Educators can use research in instrumental, conceptual, and symbolic ways (Weiss, 1979; Weiss & Bucuvalas, 1980). First, instrumental use of research (i.e., direct use) occurs when educators use research to solve specific problems or make specific decisions. Second, conceptual use of research (i.e., enlightenment or indirect use) occurs when educators use research broadly to inform their thinking on a topic. Third, symbolic use of research (i.e., political or tactical use) occurs when educators use research to retroactively defend already existing decisions or actions. Although district and building educators report using research in all three ways (e.g., Cain, 2015; Coburn, Toure, & Yamashita, 2009; Penuel et al., 2017; Weiss, Murphy-Graham, & Birkeland, 2005), federal policies tend to emphasize instrumental use of research (e.g., Penuel et al., 2017). Thus, it is possible that federal policymakers take a narrower stance than educators in their conceptualization of research, focusing policies on qualities of research that are useful for solving specific problems or driving decision-making (e.g., study design, reliability and validity of the findings, outcomes). To determine what factors might lead to a practice-policy gap, it is critical to understand both how educators conceptualize research, and the extent to which their conceptions align with those outlined in federal policy.
Educators’ Conceptions of Research
Educators’ conceptions of research often include data such as standardized test scores and student performance data, while they less often think about peer-reviewed empirical studies. For instance, Finnigan et al. (2013) found that high school educators in low performing schools relied more heavily on data use and equated research with standardized test scores. They used the term research to describe inquiring about other schools’ practices, but rarely to describe research studies, which they perceived as not fitting their school’s context. Consistent with Joram’s (2007) findings, although published research was considered nearly as credible as student data, fewer than one-third of these educators consulted scholarly or practitioner journals.
In contrast, both Honig and Coburn (2008) and Farley-Ripple (2012) found that district administrators consulted a range of evidence, including student data (e.g., standardized test scores), but also site observations, social science research, and evaluation. Administrators with well-developed conceptions of research based the quality of research on scientific and theoretical rigor, whereas those with less developed conceptions focused exclusively on a single factor like the demographic fit between a research study and the school context or the researcher’s reputation (Coburn & Talbert, 2006). More recent studies have focused on identifying educators’ conceptions of useful research, finding that educators prefer practical guide books to empirical studies (Penuel et al., 2018) and that research is most useful when educators viewed it as compatible with their existing practice and could readily observe it in use elsewhere (Neal et al., 2018).
Based upon theory and the literature, we expect educators’ conceptions of research to be broad, reflecting a range of types of evidence (e.g., local data, other schools’ programs and practices, empirical studies) and sources (e.g., books, practitioner journals, peer-reviewed journals). We might also expect educators to consider aspects of reliability, validity, fit (e.g., sample overlap with student body demographics), and credibility (e.g., the reputation of the researcher) in their conceptions of research. Consistent with this breadth, Reis-Jorge (2007) found that teachers’ conceptions of research were either functional, focused on what research does, or structural, focused on how it was produced.
Conceptions of Research in Federal Educational Policies
Federal, state, and local educational policies govern educators’ work. In this paper, we focus on federal educational policies because these apply broadly across public education, and we restrict our focus to U.S. policies because only these are likely to influence the U.S.-based educators in our sample (for national education policy outside the U.S., see Blackmore, 2002, on Australia, and Campbell, Pollock, Briscoe, Carr-Harris, & Tuters, 2017, on Canada). We specifically focus on two policies, NCLB and ESSA, which are both reauthorizations of the Elementary and Secondary Education Act (ESEA), originally passed in 1965 to increase equity in public schools through the distribution of federal funds (Kantor, 1991). Since its original passage, reauthorizations of the law have shifted its focus areas (as we describe below), but the charge to distribute funds remains the same (Farley-Ripple et al., 2018).
For many public schools, funds associated with federal policies like NCLB or ESSA (e.g., Title I – Improving the Academic Achievement of the Disadvantaged) contribute significantly to the operating budget. Access to these funds requires that educators engage in instrumental use of research to make decisions about programs and practices, but the two policies differ in the details: NCLB emphasizes the use of scientifically-based research, while ESSA emphasizes the use of evidence-based interventions (Farley-Ripple et al., 2018). NCLB defined scientifically-based research as:
…the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs; and (B) includes research that—(i) employs systematic, empirical methods that draw on observation or experiment; (ii) involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn; (iii) relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators; (iv) is evaluated using experimental or quasi experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls; (v) ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings; and (vi) has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review
(115 STAT. 1964).
Following this definition, a randomized control trial to test the efficacy of an instructional practice is an example of research, while a published collection of program testimonials about the same instructional practice is not.
ESSA shifted the focus from scientifically-based research to evidence-based interventions, a shift purported to increase the implementation of effective interventions and to improve outcomes. ESSA defined an evidence-based intervention as: “… an activity, strategy, or intervention that – (i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes” (p. 388). Following this definition, a character education program that shows positive, statistically significant effects on behavior is an example of an evidence-based intervention, while a school-developed peer-to-peer mentoring program evaluated solely via program attendance is not. ESSA further outlines criteria used to evaluate evidence-based interventions:
(I) strong evidence from at least one well-designed and well-implemented experimental study; (II) moderate evidence from at least one well-designed and well-implemented quasi-experimental study; or (III) promising evidence from at least one well-designed and well-implemented correlational study with statistical controls for selection bias; or (ii) (I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and (II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention
(p. 388)
These criteria focus on evaluating components of scientifically-based research studies (e.g., design, sample size, and effects), and favor those studies with experimental or quasi-experimental designs that allow for causal inference.
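To make the tiered structure concrete, the following sketch (illustrative Python; the function name and labels are ours, not part of either statute) maps a well-designed and well-implemented study’s design to the evidence level it can support under the clauses quoted above.

```python
def essa_evidence_tier(design: str) -> str:
    """Map a well-designed, well-implemented study's design to the highest
    ESSA evidence level it can support, per the clauses quoted above."""
    tiers = {
        "experimental": "strong evidence",
        "quasi-experimental": "moderate evidence",
        "correlational with statistical controls": "promising evidence",
    }
    # Any other design can, at best, "demonstrate a rationale" and requires
    # ongoing efforts to examine the intervention's effects.
    return tiers.get(design, "demonstrates a rationale")

print(essa_evidence_tier("quasi-experimental"))  # -> moderate evidence
```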
To date, there is limited research examining the alignment of educators’ conceptions of research with those outlined in federal educational policies. One exception is Davidson et al. (2018), who found that when research was pre-defined, educators in a nationally representative sample named multiple sources of ‘useful’ research: books (57%), research/policy reports (16%), journal articles (13%), and other sources (14%). However, of the unique sources with original analyses identified by educators, fewer than 20% met the criteria for strong, moderate, or promising evidence as outlined in ESSA. This finding supports two-communities theory’s proposition that educators and policymakers represent different communities, each with differing understandings about what research is. However, more research is needed to determine exactly where this mismatch (or lack of alignment) occurs.
METHOD
Sample and Data Collection
Data for this study were collected from public school building-, district-, and intermediate school district (ISD)-level educators throughout Michigan, within the context of a larger study. Both a snowball referral strategy and a social network-based strategy were used to recruit participants. The snowball recruitment strategy began by interviewing the superintendent of each of two intermediate school districts, with an invitation to suggest other study participants at the ISD, district, and building levels. The suggested participants were invited to participate in an interview, and were also invited to suggest further participants. This yielded 34 interview participants from referrals originating in one county, and 40 from referrals originating in another county. Separately, the network-based strategy began by using a relational chain design to identify educators who serve as knowledge brokers. A random sample of Michigan principals and superintendents was asked who they talk to for information about school programs. Those named as sources of information were contacted and asked the same question, and this process was repeated up to 11 times, thereby identifying chains of knowledge brokers. To capture perspectives from elsewhere in the state, we purposively sampled 24 of the educator knowledge brokers located using this approach, spanning 17 counties, to participate in interviews. Together, these strategies yielded a sample of 98 educator interviews.
Here we focus on an analytic sample of 90 interviews with educators across 21 counties, 19 districts, and 16 school buildings.1 The sample was primarily white (87.78%), non-Hispanic (97.78%), and female (67.78%). Interviewees had worked in their current district for an average of 11.39 years, and in their current job for an average of 7.21 years, holding a range of academic qualifications (BA = 42.22%, MA = 23.33%, Ph.D., Ed.D., or Professional = 34.44%; see Table 1).
Table 1.
Participant Demographics
| | Participants (N = 90) | | |
|---|---|---|---|
| Race | | | |
| White | 79 | (87.78%) | |
| Black or African American | 8 | (8.89%) | |
| Pacific Islander | 1 | (1.11%) | |
| Missing | 2 | (2.22%) | |
| Ethnicity | | | |
| Hispanic or Latino(a) | 2 | (2.22%) | |
| Sex | | | |
| Female | 61 | (67.78%) | |
| Male | 29 | (32.22%) | |
| Level | | | |
| ISD | 37 | (41.11%) | |
| District | 24 | (26.67%) | |
| Building | 29 | (32.22%) | |
| Education | | | |
| Bachelor's | 38 | (42.22%) | |
| Master's (or Ed.S.) | 21 | (23.33%) | |
| Ed.D. | 12 | (13.33%) | |
| Ph.D. | 17 | (18.89%) | |
| Other professional (JD, MD) | 2 | (2.22%) | |
| | Min | Mean | Max |
| No. of Districts | 0 | 3.52 | 30 |
| District Tenure (in years) | 0.06 | 11.39 | 43 |
| Job Tenure (in years) | 0.25 | 7.21 | 30 |
Interviews were conducted between Fall 2015 and Spring 2016, either in person (N = 79) or by phone (N = 11). All interviews were recorded and transcribed with the consent of the interviewee, and each interviewee received a $30 Amazon.com gift card. The interview focused on how school districts acquired information about school programs, interventions, policies, and practices, and the role that different people and organizations played in the process. However, the current study focuses narrowly on educators’ responses to one open-ended question: When you hear the word “research” in the context of school-based programs, what kinds of things do you think of?
Data Analysis
To address the first research question (i.e., How do educators conceptualize research?), open, axial, and then selective coding were used to analyze the data. First, open coding involves examining, comparing, conceptualizing, and categorizing data (Creswell, 2014). In this step, two coders independently reviewed each interview to gain additional familiarity with the data.2 Quotes or phrases that captured participants’ conceptualization(s) of research were identified and initial codes were generated. Each interview could receive more than one code. The two coders met to discuss, revise, and reach consensus on initial codes. Next, axial coding involves grouping or theming data based upon patterns in the initial codes. In this step, the two coders met to group initial codes, focusing on codes that were observed across at least 10% (n = 9) of interviews, and then met to discuss, revise, and reach consensus on broad themes. Initial codes were grouped to create two broad themes, Research Process and Research Products, which closely mirror Reis-Jorge’s (2007) distinction between structural and functional aspects of research (see Table 2). Finally, selective coding involves identifying and describing each theme and their interrelationships (Creswell, 2014). In this step, the two coders met to identify the definition for each theme, supported by example quotes from the data.
Table 2.
Educators’ Conceptions of Research Themes
| Category | Definition |
|---|---|
| Process | How research is conducted, including who conducts the research, how research is designed, and how data is collected. |
| Investigator | The person or organization who conducts a systematic investigation |
| Design | The logical structure of systematic inquiry guided by a research question |
| Methods | The process and tools used to collect information to investigate a research problem/question |
| Implementation | The process of implementing an intervention or practice to achieve a desired outcome |
| Products | What results from the research process, including data and outcomes, as well as educators’ evaluations of these products (e.g., fit and credibility). |
| Data | Information from systematic investigation |
| Outcomes | The documented results of a research study or implementation effort |
| Fit | The degree to which research is compatible with school context |
| Credibility | Research conducted, promoted, or disseminated by a known, reputable, trusted source |
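For concreteness regarding the axial-coding threshold described above, the following minimal sketch (illustrative Python with hypothetical interview identifiers and code labels; not the software actually used for this analysis) shows how initial codes tallied across interviews could be filtered at the 10% (n = 9) threshold before axial grouping.

```python
from collections import Counter

# Hypothetical open-coding output: each interview maps to the set of initial
# codes assigned by the two coders (an interview can carry multiple codes).
open_codes = {
    "interview_01": {"control group", "student outcomes", "surveys"},
    "interview_02": {"student outcomes", "demographic fit"},
    # ... remaining interviews ...
}

N_INTERVIEWS = 90
MIN_COUNT = 9  # the 10% threshold used to select codes for axial grouping

# Count the number of interviews in which each initial code appears.
code_counts = Counter(code for codes in open_codes.values() for code in codes)

# Keep only codes meeting the threshold; these feed the axial grouping step.
retained = {code: n for code, n in code_counts.items() if n >= MIN_COUNT}

for code, n in sorted(retained.items(), key=lambda kv: -kv[1]):
    print(f"{code}: {n} interviews ({n / N_INTERVIEWS:.1%})")
```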
To address the second research question (i.e., To what extent do educators’ conceptions of research align with recent U.S. federal educational policies?), the two coders coded the NCLB and ESSA definitions of research, aligning them with the codes identified in their analysis of educator interviews when possible (see Table 3). For example, the NCLB scientifically-based research definition clause “relies on measurements or observational methods that provide reliable and valid data” was coded as “Data,” which had previously been identified as a code in the educator interviews. Then the coders reviewed the policy definitions to identify key concepts that did not fit any of the codes identified in the educator interviews, and developed an additional set of codes. For example, NCLB discusses the need to “test the stated hypotheses,” but because no educators had mentioned hypotheses in their interviews, an applicable code did not already exist. In such cases, a new code was created. This two-stage policy coding process ensured that alignments between educators’ conceptions of research and policy definitions could be identified, but that elements of policy definitions not present in educators’ conceptions could also be identified. The final set of codes was grouped into two larger themes: research process and research products.
Table 3.
Alignment of Educators’ Conceptions of Research with NCLB and ESSA Policy Definitions
| THEME | CATEGORY | EXAMPLE QUOTE | # OF INTERVIEWS MENTIONING | POLICY EXCERPT | # OF INTERVIEWS ALIGNED |
|---|---|---|---|---|---|
| | | Educators’ Conceptions of Research | | Federal Policy | |
| PROCESS | | | 45 | | 23 |
| | Hypotheses | | 0 | involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn (NCLB) | 0 |
| | Investigator | | 10 | | |
| | Design | | 17 | evaluated using experimental or quasi experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls (NCLB); (I) strong evidence from at least one well-designed and well-implemented experimental study; (II) moderate evidence from at least one well-designed and well-implemented quasi-experimental study; or (III) promising evidence from at least one well-designed and well-implemented correlational study with statistical controls for selection bias (ESSA) | 9 |
| | Methods | | 13 | (ii) involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn; (iii) relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators; … (v) ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings (NCLB) | 13 |
| | Implementation | | 15 | | |
| PRODUCTS | | | 74 | | 47 |
| | Data | | 22 | relies on measurements or observational methods that provide reliable and valid data (NCLB) | 0 |
| | Outcomes | | 47 | relevant to education activities and programs (NCLB); likely to improve student outcomes or other relevant outcomes; and (II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention (ESSA) | 47 |
| | Fit | | 22 | | |
| | Credibility | | 17 | has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review (NCLB) | 6 |
The final column of Table 3 reports, for each code, the number of interviews in which the educators’ conception of research aligned with the policy definition. We did not expect any instances of verbatim alignment, that is, that educators would use the exact policy language in their interview responses. Therefore, an educator’s conception is counted as aligned with policy here when the educator mentioned one or more elements of the relevant policy excerpt. For example, although ESSA provides very detailed guidance about appropriate research designs, educators were counted as having policy-aligned conceptions regarding design if they discussed one or more of the design features (e.g., random assignment, experimental design, control group) identified by the policy.
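Stated as a rule, this counting procedure reduces to a set intersection. The sketch below (illustrative Python; the feature keywords are assumptions drawn from the example in the text, not an exhaustive list) shows how an interview could be flagged as policy-aligned on design.

```python
# Design features named in the NCLB and ESSA excerpts (illustrative subset).
POLICY_DESIGN_FEATURES = {
    "random assignment",
    "experimental design",
    "quasi-experimental design",
    "control group",
    "correlational study with statistical controls",
}

def aligned_on_design(features_mentioned):
    """An interview counts as policy-aligned on design if it mentions at
    least one design feature identified in policy; verbatim overlap with
    the policy language is not required."""
    return bool(set(features_mentioned) & POLICY_DESIGN_FEATURES)

# One policy-named feature is enough; policy-absent features do not count.
print(aligned_on_design({"control group", "pre-post test"}))  # True
print(aligned_on_design({"meta-analysis", "evaluation"}))     # False
```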
RESULTS
Educators had broad conceptions of research that were grouped around two major themes: research process (e.g., design, methods) and research products (e.g., data, outcomes). Process refers to how research is conducted, including who conducts the research, how research is designed, and how data is collected. Products refer to what results from the research process, including data and outcomes, as well as educators’ evaluations of these products (e.g., fit and credibility). We found that educators often talked about multiple aspects of process and products, and that there was variation in the ways in which they discussed them. These same themes emerged from the NCLB and ESSA policies, but their scope was more focused. For instance, within the research process, while educators discussed a range of different research designs (e.g., meta-analysis, evaluations, experimental and quasi-experimental designs), policies focused more narrowly on experimental or quasi-experimental designs.
How do educators conceptualize research?
Process
In total, 45 (50%) interviews discussed one or more categories of the research process; however, each category appeared in only 10 (11.11%) to 17 (18.89%) interviews.
Investigators
Investigators refers to the person or organization who conducts a systematic investigation. Investigators were mentioned in 10 (11.11%) interviews. Educators discussed various types of investigators, including universities or university employees who produced empirical research (n = 5), and teachers and principals who were involved in school or district-based research that could directly inform school-level practices (n = 4). For example, one educator shared:
You have your purely academic research that you might find in a peer reviewed journal ….at the same time we also like to look at action research and build capacity within teachers and principals to do that kind of research within their own districts and communities…
Educators also mentioned researchers and practitioners as co-investigators in research-practice partnerships (n = 4). They noted the desire for university researchers to understand and incorporate problems of practice in their work, and the desire for researchers to be involved in practice settings to help address concerns of practitioners. For instance, one educator shared:
I want the researcher at that table while we sit there and go oh my gosh now what, right. and I want the researcher there because I want the researcher to understand a genuine problem of practice. Does that make sense?
Design
Design refers to the logical structure of systematic inquiry guided by a research question and was mentioned in 17 (18.89%) interviews. While design appeared in a number of interviews, there was great variation in the specific types of designs that were discussed. Several educators (n = 5) identified meta-analysis as a significant research design for them because it includes a process of synthesizing and identifying important information from existing research:
We try to ascribe to the scientifically-based research and definition, where, you know, the studies are done basically with a code that, you know are required and that it includes more than one piece of research. So, a meta-analysis. So, you know, when you think of the [author] work, which I think has really taken us to another level of how we look at the, you know, research for teachers.
A few educators described evaluations as a type of design (n = 3). Other examples of design included experiments or quasi-experiments (n = 2) or mentions of specific features typically associated with designs like controlled studies (n = 6) or pre-post tests (n = 4):
So that might be you know something that’s on the promising practices list or the Department of Ed’s best practices list… something that has, maybe its not you know a double blind random control study but something that has some evidence base behind it that shows some kind of results.
Another educator shared:
so, again, I think of someone who’s had a control group, someone who’s had- you know, and then measure the control group and measure the strategy and seeing if there’s a difference in outcomes.
Methods
Methods refers to the process and tools for collecting information to investigate a research problem or question and was mentioned in 13 (14.44%) interviews. Educators focused on several different aspects of research methods including sample size (n = 3), data collection (n = 4) and data analysis (n = 3). When discussing sample size, educators tended to equate larger samples with higher quality evidence:
I’m not interested in very small studies unless we can combine those studies to you know an aggregate or something like that. So I need to be able to count on the information I’m going to get from the research that will help me understand that whatever it is we’re studying is going to be effective for students.
They also focused on conventional modes of data collection (e.g., surveys) that closely resemble the methods used to collect performance data from students (e.g., standardized tests):
I think of surveys…that would be a parent survey, that would be a student survey, it’d be a staff survey, and community survey, all the stakeholders involved.
Implementation
Implementation refers to the process of implementing an intervention or practice to achieve a desired outcome and was mentioned in 15 (16.67%) interviews. Educators often discussed fidelity, occasionally only in passing (n = 3), but more often to emphasize the importance of contextual factors in implementation (n = 5):
I’ve always worked in grants where you had to say obviously do everything with fidelity, which I totally agree with, but sometimes when you’re choosing an evidence-based strategy, there is little room for flexibility and sometimes what is a part of that strategy may not work with the target population you’re dealing with. So not every strategy a hundred percent fits every population.
Educators also frequently discussed wanting to see implementation in action (n = 6), for example, as evidence that implementing a program was possible and effective:
For me research is again seeing the program in action at a school, being able to go to school districts websites to find out if the program worked or if it didn’t. in other words, hearing the success stories.
Products
In total, 74 (82.22%) interviews mentioned one or more categories of research products; however, each category appeared in only 17 (18.89%) to 47 (52.22%) interviews.
Data
Data refers to the information that comes from systematic investigation and was mentioned in 22 (24.44%) interviews. Participants were consistent in the way they discussed data, frequently describing it as an intermediate product that facilitated development of other types of products, like outcomes or evidence-based practices. In most cases participants mentioned data in general terms, for example noting that:
They [Michigan Department of Education multi-tiered system of support] have a wealth of data that is based upon research-based practices that shows marked increases in student achievement.
However, when participants were more specific about the type of data, they focused on quantitative data (n = 3):
I think of data. I think of stats. I think of looking at the data, looking at the impact of what is done from a climate perspective.
Outcomes
Outcomes refer to the documented results of a research study or an implementation effort and were mentioned in 47 (52.22%) interviews, more often than any other aspect of research. Educators’ mentions of outcomes included student achievement, student behavior, and school climate, and at times outcomes were the first thing that came to mind in their conceptions of research: “[I think of] student outcomes. That’s it.” Indeed, some educators saw demonstrated outcomes as the part of research that makes it useful for decision-making around the adoption of programs or practices:
so it’s always helpful to know that yes they’ve tested it here and there and students reported this and that and therefore there was a x percentage of gain or loss or whatever you’re- so to see facts and figures is helpful in deciding.
Thus, for educators, whether research or the use of an evidence-based intervention yields desirable outcomes is a key feature of what research is.
In many cases (n = 15), educators’ discussions of research outcomes focused specifically on evidence-based practices, which they sometimes also called best practices: “I’m thinking of like best practices, things that have been known to work, not necessarily something you purchase.” Educators described challenges in identifying best practices because of the need to interpret research activities as part of the process:
But I also know teachers aren’t researchers, so we have a hard time understanding what is a best practice because you can find research studies with really small sample sizes, research studies with really small effect sizes and be misled, which is why the [author] work was- with the [book] was really good cause it was research on research. So he looked at what does all the research say on class size, not just one…
This quote also suggests that certain ways of presenting findings may be important for helping educators to identify best practices. Products like meta-analyses or research summaries that evaluate available evidence can reduce the burden on teachers to interpret research.
Fit
Fit refers to the degree to which research is compatible with school or district context and was discussed in 22 (24.44%) interviews. Educators mentioned research being compatible with context (n = 10), needs (n = 6), resources (n = 3), and demographics (n = 7). In some cases, educators focused on the need for specificity, describing how research may not be appropriate for all contexts and noting challenges locating research conducted on populations demographically similar to their own:
I look at what the research was done on, like what kind of demographics and would it be transferrable to our population. So that’s something to always keep in mind.
However, in other cases, educators seemed to recognize the value of research that is generalizable, and discussed considering whether the research is transferable to multiple populations:
you know again you have to look at the populations and everything that the research was done on to make sure that it is you know transferable to any population or what population it, you know the research was done on.
Credibility
Credibility refers to research conducted, promoted, or disseminated by a reputable or trusted source and was mentioned in 17 (18.89%) interviews. Educators most often thought about research from universities (n = 4), and occasionally referred to research conducted by specific individuals (n = 3). Participants considered the source of research a critical factor in evaluating its quality, in some cases placing it above more conventional markers of quality such as peer review:
So even if it has not been published formally in academia, I think that it carries more weight when you can say this person works at [university 1 or university 2] and they’ve found this to be a valid and reliable program, that goes farther. However, I can also go down the hall and talk to five different teachers about practices that are evidence-based that they see working and it may not be a formalized product or program and to me there’s a lot of value in that and that goes back to teachers trusting each other…
Group differences
Although we focused on conceptions of research held by educators who play a role in selecting and deciding to use school-based programs, this group still represents quite a diverse range of individuals working in different roles and in different local contexts. These differences may systematically impact how educators conceptualize research. To explore this possibility, we compared the conceptions held by educators in different groups in three ways. First, we compared educators working in each of the two counties from which the majority of our sample was drawn. Second, we compared educators working at different levels of the public education system: county-level intermediate school districts, local school districts, and individual school buildings. Third, we compared educators in executive roles (i.e., principals and superintendents) to those in non-executive roles. We did not observe significant differences between any of these groups in the frequency with which educators mentioned either the process or product aspects of research. This finding of no group differences lends support for viewing educators as a single community within the lens of two-communities theory.
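The study does not specify the statistical test behind these group comparisons; one conventional choice for comparing mention frequencies across groups would be a chi-square test of independence, sketched below with purely illustrative counts.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are educator groups (e.g., executive vs.
# non-executive), columns are counts of interviews that did / did not
# mention any research-process category. Counts are illustrative only,
# not the study's actual data.
table = [[12, 17],
         [33, 28]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A p-value at or above the conventional .05 level would be consistent
# with the reported finding of no significant group differences.
```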
To what extent do educators’ conceptions of research align with NCLB and ESSA?
Many of the components of educators’ conceptions of research also appear in policy definitions of research; however, we observed only partial alignment between how educators discuss these components and how they appear in policy. Typically, educators’ conceptions were broader and more inclusive, while policy definitions were narrow and precise. Additionally, educators’ conceptions of research included some components that do not appear in policy, and policy definitions refer to concepts that educators did not mention. Thus, providing some support for a practice-policy gap and two-communities theory, we find that educators’ conceptions of research align only partially with federal policy definitions.
Process
Like educators, NCLB and ESSA also discuss both design and methods as key features of research. Policy definitions of research design were quite detailed and specific, focusing on a narrow set of experimental or quasi-experimental designs, while not referencing and thus implicitly excluding other types of designs (e.g. meta-analysis, descriptive, etc.). As a result, although educators mentioned design-related features of research in 17 interviews, only 9 of these interviews referred to one or more aspects of design described in policy. Policy definitions of research methods were more general, focusing not on specific methods, but broadly on data collection, multiple measurements across studies, and data analysis. Thus, all 13 educators who discussed methods-related features of research in their interviews referred to one or more aspects of method described in policy. Therefore, educators’ conceptions of research are aligned with policy to the extent that they both consider design and method, but the alignment on these aspects is partial.
These policies’ consideration of the research process was restricted to the actual research activities; however, educators’ conceptions of the research process also included parts that both precede (the identity of the investigator) and follow (implementation) the research activities themselves. For policies defining research, these parts of the process fall outside the scope of what research is; research depends on what is done, not on who is doing it or whether the research is subsequently used. This highlights a misalignment between educators and policy, with educators viewing research as a more encompassing process.
Similarly, NCLB defines research as a hypothesis-driven process, that is, a set of activities designed to test hypotheses and thereby draw general conclusions. However, none of the educators in this sample mentioned the role of hypotheses or hypothesis testing when describing their understanding of research. This highlights a second misalignment between educators and policy, with educators conceptualizing research more broadly as including a range of empirical activities (i.e., data gathering), but policy more narrowly defining it as activity specifically intended to test a priori hypotheses.
Products
Like educators, NCLB and ESSA discuss data, outcomes, and credibility as key features of research. Policy definitions of data focused specifically on reliable and valid data, while educators discussed data more broadly. As a result, although educators mentioned data in 22 interviews, none of those interviews mentioned reliable or valid data. Thus, educators’ conceptions of data did not align with policy definitions. Policy definitions of outcomes were more general, focusing on outcomes relevant to educational activities and programs, and those that improve student outcomes or other relevant outcomes. Thus, all 47 educators who discussed outcomes in their interviews aligned with policy because they referred to outcomes relevant to educational activities and programs or to improved student outcomes. Lastly, policy definitions of credibility focused on external review via a peer-reviewed journal or independent experts, while not referencing other ways to establish credibility, such as through professional organizations or colleagues. As a result, although educators mentioned credibility of research in 17 interviews, only 6 of these interviews referred to credibility as described in policy. Thus, educators’ conceptions of credibility partially aligned with federal policy definitions.
Policy consideration of research products was restricted to data, outcomes, and credibility. However, educators’ conceptions of research products also included their judgment of those products (i.e., fit). Thus, educators’ conceptions of research products included not only the products of the research itself but also secondary products; these secondary products fell outside the scope of research as defined by policy, highlighting a misalignment between educators and policy.
DISCUSSION
Although U.S. federal policies encourage educators to use research (ESSA, 2015; NCLB, 2002), few studies have focused on how educators conceptualize research and whether their conceptions align with these policies (see Davidson et al., 2018 for an exception). Indeed, despite a growing literature on gaps among the research, practice, and policy communities in many fields beyond education, it remains rare to explicitly study how each of these communities conceptualizes and defines research. Two-communities theory (Caplan, 1979; Locock & Boaz, 2004; Farley-Ripple et al., 2018) suggests that because educators and federal policymakers work in different social communities and emphasize different types of research use (Weiss, 1979; Weiss & Bucuvalas, 1980), there may be a practice-policy gap in the ways that these two groups conceptualize research. This study extends prior research and builds upon two-communities theory by examining conceptions of research among Michigan educators, and assessing how these conceptions align with the NCLB definition of scientifically-based research and the ESSA definition of evidence-based interventions. Exploring educators’ conceptions of research and their alignment with federal policy can highlight opportunities for promoting educators’ use of research and, relatedly, for bridging the practice-policy gap.
Educators’ Conceptions of Research
In this study, we found that educators’ conceptions of research included two main themes – research process and research products – which did not vary across educators at different levels, in different roles, or in different local contexts. When educators conceptualized research, they were less likely to mention research process than products. Conceptions of the research process included the full range of process activities, starting with the investigators, and including design, methods, and implementation. Conceptions of research products were similarly inclusive, ranging from immediate products (e.g. data and outcomes) to evaluations of those products (e.g. fit and credibility). These findings are consistent with prior literature, which highlighted the broad scope of educators’ conceptions of research (Coburn & Talbert, 2006; Farley-Ripple, 2012; Finnigan et al., 2013; Honig & Coburn, 2008; Joram, 2007). Specifically, related to process, these studies have found that educators focus on broad aspects of design (e.g., empirical investigation), and methods (e.g., psychometric properties and use of multiple measures). However, unlike the past literature, we also found that educators’ conceptions of research include more long-range processes (e.g., implementation). Related to products, these studies have found that educators focus on a range of data and outcomes as well as perceived fit between a research study and school context, and credibility of research or researchers. In our study, educators’ conceptions of the products of research were largely consistent with the past literature.
Alignment with NCLB and ESSA
At the broadest level, educators’ conceptions of research aligned with the NCLB definition of scientifically-based research and the ESSA definition of evidence-based interventions in that they focused on aspects of both process and products. However, a more detailed look revealed that NCLB and ESSA offer a much narrower conception of the research process and research products than educators, suggesting some evidence of a practice-policy gap. For instance, NCLB and ESSA focused on experimental, quasi-experimental, and correlational designs, whereas educators also considered meta-analytic and evaluation designs. Similarly, NCLB and ESSA focused on reliable and valid data, whereas educators often discuss data in more general terms. Compared to these policies, educators also had a much broader view of what makes research credible. Whereas these policies primarily view credibility as deriving from publication in a peer-reviewed outlet, educators reported that the status of the author (e.g. at a well-known university) or the source of the information (e.g. a trusted colleague) can also serve as markers of credibility. This misalignment in conceptions of credibility may derive from another misalignment: the slowness of research, and the immediacy of educators’ needs. Research can take a long time to make its way through the peer review and publication processes, while educators simply cannot wait and must turn to other indicators of credibility to facilitate their quicker access to research.
Unlike design and credibility, NCLB and ESSA have a relatively broad conception of methods and outcomes, so their definitions of these aspects of research aligned more closely with educators’ conceptions. Educators’ particularly frequent focus on outcomes, which aligned with policy, is perhaps not surprising because the central aim of these policies is to improve student outcomes. Indeed, the names of the policies themselves – No Child Left Behind and Every Student Succeeds – explicitly reference outcomes in the form of reducing achievement gaps and promoting universal achievement, respectively. Moreover, the ultimate metric against which educators’ adherence to these policies is measured is evidence of improved student outcomes. That is, although these policies advocate research use, this is a proximal goal, and there has generally not been a concerted effort to measure or reward educators for using research. Instead, measurement and rewards focus on the more distal goal of improving outcomes. Thus, we observe a high degree of alignment between educators and policy on the premise that improving outcomes – “moving the needle” – is critical, but somewhat less alignment on what it means to use research to achieve such a goal.
Implications and Conclusions
These findings have implications for federal policymakers, educators, and researchers. Our finding that educators’ conceptions of research are often misaligned or only partially aligned with definitions of research in education policy suggests the need for more bidirectional communication between policymakers and educators. Federal policymakers can be clearer about what kinds of research “count,” and take steps to help educators identify research that meets this definition and thus fulfills policy requirements. For example, definitions of research that appear in education policies might be accompanied by short practitioner-friendly checklists, like the ESSA summary of recommended study criteria, that can be used to assess when and to what extent a piece of evidence satisfies policy requirements. Additionally, federal policymakers and educators can work together when education policy is being developed to ensure that they are speaking the same language and that the resulting policy documents will be understandable in both the policy and practitioner communities.
Similarly, researchers (especially education researchers) can conduct and disseminate their research in ways that match the conceptions of both educators and policymakers. Researchers are already attuned to reporting features of their work, like methods (e.g., data collection and analysis), that are essential components of education policy definitions of research. However, researchers should also routinely report things like validity, reliability, and details regarding experimental designs that attend to federal policy definitions of research. Researchers should also attend to the features of their work that matter to educators but are not mentioned in federal education policy. Specifically, researchers should include information about how to implement research findings and should report the demographic and contextual details that allow educators to assess the findings’ fit and generalizability. Recognizing that policy and practice audiences approach research from different, but overlapping, perspectives and considering these differences when conducting and reporting research can maximize the potential for bridging the research-practice gap. To improve dissemination efforts, it might be helpful to develop checklists that researchers can follow to ensure that their reporting includes components consistent with both federal policy and educators’ conceptions of research.
Limitations and Future Directions
This study represents an initial exploration into educators’ conceptions of research and their alignment with federal education policy, and thus it must be interpreted with some limitations in mind. The sample was drawn largely from two predominantly white Michigan counties; future studies would benefit from exploring educators’ conceptions of research in other locations and in demographically diverse settings. This study examined the extent to which educators’ conceptions of research aligned with definitions provided by two recent U.S. federal educational policies. However, future research might extend the current study by examining how educators’ conceptions of research align with additional resources provided by policymakers. For example, ESSA includes supplementary non-regulatory guidance for using evidence that highlights aspects like sample size, fit with demographics and setting, and What Works Clearinghouse standards. Future research could also explore how educators’ conceptions align with other educational policies like state or local mandates, and how this localized alignment relates to the research-practice gap. Additionally, this study is focused on United States federal education policy; future research should examine educators’ conceptions of research and their alignment with education policy in other national contexts. Lastly, future research should extend this study by exploring the extent to which differences between educators’ and policymakers’ understandings of research influence how and when educators use research. For example, future studies could determine whether educators who have conceptions of research that are more aligned with federal policy are more likely to engage in instrumental forms of research use that align with expectations in federal policy (e.g., using What Works Clearinghouse to identify programs and practices).
The purpose of this study was to understand how public-school educators conceptualize research, and how these conceptions aligned with recent U.S. federal educational policies, NCLB and ESSA. Through the analysis of interview data, findings suggested that educators’ conceptions of research were broad and multifaceted. Educators most often discussed research outcomes, but to varying degrees discussed many aspects of the research process and the products of that process. These conceptions only partially aligned with definitions of research in federal policy (i.e. NCLB and ESSA), which tended to be much narrower. These findings suggest that educators and policymakers have some overlap in their conceptions of research, but also approach research from different perspectives.
Key messages:
We examine how 90 U.S. educators define research and how these definitions align with federal policy.
Educators’ definitions of research reflected two major themes: process and products.
Educators’ definitions of research were broad, while policy definitions were narrow and precise.
Findings have implications for both federal policymakers and educational researchers.
Acknowledgments:
The authors would like to thank Camren Wilson for his helpful feedback on the study results. We would also like to thank all participating educators for contributing their time and perspectives to this study.
Funding Details:
This study was funded by an Officer’s Research Award (#182241) and a Use of Research Evidence Award (#183010) from the William T. Grant Foundation. Additional support for this research also came from an R21 research grant from the National Institute of Mental Health (#1R21MH100238-01A1).
Footnotes
Conflicts of Interest: The authors declare that there is no conflict of interest.
1. An unexpected data loss event resulted in the loss of a random 9 interviews.
2. These coders are the first and second authors, who also served as interviewers during the data collection stage, and therefore already had familiarity with the data.
References
- Bartels N, 2003, How teachers and researchers read academic articles. Teaching and Teacher Education, 19, 737–753.
- Barwick MA, Barac R, Akrong LM, Johnson S, & Chaban P, 2014, Bringing evidence to the classroom: Exploring educator notions of evidence and preferences for practice change. International Education Research, 2, 4, 1–15.
- Blackmore J, 2002, Is it only ‘What works’ that ‘Counts’ in New Knowledge Economies? Evidence-based Practice, Educational Research and Teacher Education in Australia. Social Policy and Society, 1, 257–266.
- Cain T, 2015, Teachers’ engagement with research texts: beyond instrumental, conceptual, or strategic use. Journal of Education and Teaching, 41, 5, 478–492. doi: 10.1080/02607476.2015.1105536
- Campbell C, Pollock K, Briscoe P, Carr-Harris S, & Tuters S, 2017, Developing a knowledge network for applied education research to mobilise evidence in and for educational practice. Educational Research, 59, 209–227.
- Caplan N, 1979, The two communities theory and knowledge utilization. The American Behavioral Scientist, 22, 459–471.
- Coburn CE, & Talbert JE, 2006, Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112, 4, 469–495.
- Coburn CE, Toure J, & Yamashita M, 2009, Evidence, interpretation, and persuasion: Instructional decision making at the district central office. Teachers College Record, 111, 4, 1115–1161.
- Creswell JW, 2014, Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Thousand Oaks, CA: SAGE Publications.
- Davidson KL, Penuel WR, & Farrell CC, 2018, What Counts as Research Evidence? How Educational Leaders’ Reports of the Research they Use Compare to ESSA Guidelines. Paper presented at the Society for Research on Educational Effectiveness, Washington, D.C.
- Every Student Succeeds Act (ESSA) of 2015, 20 U.S.C.A. § 6301 et seq. (U.S. Government Publishing Office, 2015).
- Farley-Ripple EN, 2012, Research use in school district central office decision making: A case study. Educational Management Administration & Leadership, 40, 6, 786–806.
- Farley-Ripple EN, May H, Karpyn A, Tilley K, & McDonough K, 2018, Rethinking connections between research and practice in education: A conceptual framework. Educational Researcher, 47, 235–245.
- Finnigan KS, Daly AJ, & Che J, 2013, Systemwide reform in districts under pressure: The role of social networks in defining, acquiring, using, and diffusing research evidence. Journal of Educational Administration, 51, 4, 476–497.
- Green LW, Ottoson JM, Garcia C, & Hiatt RA, 2009, Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annual Review of Public Health, 30, 151–174.
- Hill HC, 2001, Policy is not enough: Language and interpretation of state standards. American Educational Research Journal, 38, 289–318.
- Honig MI, & Coburn C, 2008, Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy, 22, 4, 578–608.
- Joram E, 2007, Clashing epistemologies: Aspiring teachers’, practicing teachers’, and professors’ beliefs about knowledge and research in education. Teaching and Teacher Education, 23, 2, 123–135.
- Kantor H, 1991, Education, social reform, and the state: ESEA and federal education policy in the 1960s. American Journal of Education, 100, 1, 47–83.
- Locock L, & Boaz A, 2004, Research, policy, and practice – worlds apart? Social Policy and Society, 3, 375–384.
- Lomas J, 2007, The in-between world of knowledge brokering. BMJ: British Medical Journal, 334, 7585, 129–132.
- Long ACJ, Sanetti LMH, Collier-Meek MA, Gallucci J, Altschaefl M, & Kratochwill TR, 2016, An exploratory investigation of teachers’ intervention planning and perceived implementation barriers. Journal of School Psychology, 55, 1–26.
- Malin JR, Brown C, & Trubceac AS, 2018, Going for broke: A multiple-case study of brokerage in education. AERA Open, 4, 1–4.
- Neal JW, Mills KJ, McAlindon K, Neal ZP, & Lawlor JA, in press, Multiple audiences for encouraging research use: Uncovering a typology of educators. Educational Administration Quarterly.
- Neal JW, Neal ZP, Kornbluh M, Mills KJ, & Lawlor JA, 2015, Brokering the research-practice gap: A typology. American Journal of Community Psychology, 56, 3/4, 422–435. doi: 10.1007/s10464-015-9745-8
- Neal JW, Neal ZP, Lawlor JA, Mills KJ, & McAlindon K, 2018, What makes research useful for public school educators? Administration and Policy in Mental Health and Mental Health Services Research, 45, 3, 432–446. doi: 10.1007/s10488-017-0834-x
- Newman J, Cherney A, & Head BW, 2016, Do policymakers use academic research? Reexamining the “two communities” theory of research utilization. Public Administration Review, 76, 1, 24–32.
- No Child Left Behind (NCLB) Act of 2001, 20 U.S.C.A. § 6319 et seq. (2002).
- Nutley S, Walter I, & Davies HTO, 2003, From knowing to doing: A framework for understanding the evidence-into-practice agenda. Evaluation, 9, 125–148. doi: 10.1177/1356389003009002002
- Penuel WR, Allen A-R, Coburn CE, & Farrell C, 2015, Conceptualizing research–practice partnerships as joint work at boundaries. Journal of Education for Students at Risk, 20, 182–197.
- Penuel WR, Briggs DC, Davidson KL, Herlihy C, Sherer D, Hill HC, Farrell CC, & Allen A, 2017, How school district leaders access, perceive, and use research. AERA Open, 3, 1–17.
- Penuel WR, Farrell CC, Allen A-R, Toyama Y, & Coburn CE, 2018, What research district leaders find useful. Educational Policy, 32, 540–568.
- Reis-Jorge J, 2007, Teachers’ conceptions of teacher-research and self-perceptions as enquiring practitioners—A longitudinal case study. Teaching and Teacher Education, 23, 402–417.
- Spillane JP, Reiser BJ, & Reimer T, 2002, Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72, 387–431.
- Weiss CH, 1979, The many meanings of research utilization. Public Administration Review, 39, 426–431.
- Weiss CH, & Bucuvalas MJ, 1980, Social science and decision-making. New York: Columbia University Press.
- Weiss CH, Murphy-Graham E, & Birkeland S, 2005, An alternate route to policy influence: How evaluations affect D.A.R.E. American Journal of Evaluation, 26, 12–30.
