Author manuscript; available in PMC: 2024 Aug 26.
Published in final edited form as: Educ Policy (Los Altos Calif). 2019 Sep 16;36(2):247–281. doi: 10.1177/0895904819874756

The Competitive Effects of School Choice on Student Achievement: A Systematic Review

Huriya Jabbar 1, Carlton J Fong 2, Emily Germain 1, Dongmei Li 3, Joanna Sanchez 4, Wei-Ling Sun 1, Michelle Devall 1
PMCID: PMC11346812  NIHMSID: NIHMS1911579  PMID: 39188589

Abstract

School-choice policies are expected to generate healthy competition between schools, leading to improvements in school quality and better outcomes for students. However, the empirical literature testing this assumption yields mixed findings. This systematic review and meta-analysis tests this theory by synthesizing the empirical literature on the competitive effects of school choice on student achievement. Overall, we found small positive effects of competition on student achievement. We also found some evidence that the type of school-choice policy and student demographics moderated the effects of competition on student achievement. By examining whether school competition improves outcomes, our findings can inform decisions of state and local policymakers who have adopted or are considering adopting school-choice reforms.

Keywords: competition, school choice, meta-analysis, achievement, educational policy, economics of education


Access to high-quality schools is persistently unequal in the United States, in part due to historical patterns of segregation, housing discrimination, and the strong link between property values and school resources (Chetty, Hendren, Kline, Saez, & Turner, 2014; Frankenberg & Lee, 2003; Howell & Peterson, 2002; Reardon, 2011). School-choice policies, such as private school vouchers, inter- and intra-district open enrollment policies, charter schools, magnet schools, and homeschooling, often intend to remedy this problem by giving parents, particularly those who are low-income or racial minorities, more options. In addition to providing parents with more educational options, such policies may also generate competition between schools, leading to improvements in school quality and better outcomes for all students (Wohlstetter, Smith, & Farrell, 2013).

In theory, when families can choose between schools, school leaders experience market pressures, which arise from the threat of losing enrollment to nearby schools. They may respond by improving the efficiency of their own schools and the effectiveness of their instruction. If schools do not respond, they risk losing funding—which accompanies each student—and, subsequently, school closure. Increased competition is expected to improve student and school outcomes for all students, even those whose families do not actively choose (Goldhaber & Eide, 2003)—a rising tide that “lifts all boats” (Hoxby, 2003). Others, however, argue that competition for students has adverse impacts on already struggling public schools, if choice leads to clustering of the most marginalized students in traditional public schools or reduces educational revenues and creates fiscal constraints (Ni, 2009). Competition may also encourage schools to focus on superficial aspects, such as marketing, rather than improving curriculum and instruction (Loeb, Valant, & Kasman, 2011; Lubienski, 2007).

Despite strong theoretical claims about the power of competition to improve schools, empirical evidence is mixed (Belfield & Levin, 2002; Gill & Booker, 2015; Ni & Arsen, 2010). In the past two decades, a growing body of work has examined whether competition improves student outcomes (e.g., Bettinger, 2005; Bifulco & Ladd, 2006; Carr, 2011; Cordes, 2018; Egalite, 2016; Figlio & Karbownik, 2016; Hoxby, 2003; Imberman, 2011; Ni, 2009; Sass, 2006; Zimmer & Buddin, 2009). This research has found that the effects of competition on student achievement and graduation rates are mixed overall, and these effects have generally been small whether positive or negative (Belfield & Levin, 2002; Ni & Arsen, 2010). There is also a great deal of variation in the methods and measures of competition used to examine competitive effects (Belfield & Levin, 2002; Creed, 2015; Egalite, 2013; Linick, 2014). This uncertainty in the literature leaves administrators and policymakers without the clear evidence they need to make difficult policy choices.

To make sense of the evidence, we conducted a meta-analysis to explore the empirical literature on the competitive effects of school choice on student outcomes in the United States. Although two narrative reviews of the literature exist (Belfield & Levin, 2002; Ni & Arsen, 2010), one is now more than 16 years old, and the other, more recent review (8 years old) focuses only on charter school competition, not competition resulting from school vouchers or private schools. The first review examines 41 empirical studies, finding modest positive effects of competition in a majority of studies (Belfield & Levin, 2002). The second review examines 11 studies focusing on charter school competition, and the authors report that three studies find no effects of competition on traditional public schools, three find a negative effect of competition, and five find positive effects, which are usually very small (Ni & Arsen, 2010). Neither review includes a meta-analysis of the results, in part due to the significant variation in studies at the time these reviews were conducted. Because a handful of studies may contain various biases and divergent findings, aggregating multiple studies together may reduce biases and provide the most comprehensive kind of evidence. Moreover, across-study synthesis allows us to test the moderating role of numerous variables that may influence school competition effects on student achievement. From a policy perspective, meta-analysis can gather all relevant studies on a given topic for stakeholders and produce a conclusion about what the existing research says.

The overarching question we examine is the following: Do school-choice policies have the potential to benefit only the parents and students who engage in selecting schools, or do they have positive effects via competition on all students, including those “left behind” in traditional public schools? Generating competition is a key aim of charter school and voucher policies, yet policymakers lack a clear understanding of how school-choice policies affect the educational system as a whole. Specifically, we ask the following questions: Does competition between schools affect student academic achievement? What is the magnitude and variability of these effects? We also examine key moderators of the competitive effects, which are important for understanding not only how competitive effects operate, but also the settings and contexts in which they are more impactful. We ask the following questions: Does the type of school-choice policy (e.g., charter, private, public school) moderate the effects of competition? Does the student population (e.g., percent eligible for free or reduced lunch, percent underrepresented minority) moderate the effects of competition? We also examine the relationship between the research design and results of the studies: Does the type of measure used to capture “competition,” or the method used in estimating the effect of competition, moderate the effects of competition?

This meta-analysis examines the empirical evidence on competitive effects that result from charter school, school voucher, or other school-choice policies (e.g., public school choice or open enrollment) on a range of student outcomes, focusing on studies that include an explicit measure of competition. While meta-analyses are under-used in some areas of education policy, such as school choice, compared with other fields, such as health, curricular and programmatic intervention research, and educational psychology, there has been growing interest in research syntheses in the policy world (Ahn, Ames, & Myers, 2012; Hattie, Rogers, & Swaminathan, 2014). The obvious challenge is that they can make “apples to oranges” comparisons, so researchers must account for issues such as the type of study design; nevertheless, we argue that syntheses are promising for guiding policymakers’ decisions, especially when accompanied by narrative review. Our review has implications for policymakers making decisions about whether to implement or expand school-choice policies by exploring a key assumption made by choice advocates. By describing how competitive effects vary by context or setting, we provide policymakers with more nuanced information about how choice affects settings similar to their own, upon which to base decisions.

Policy Context and Background on School Choice

School-choice policies, like charter schools and vouchers, have been enacted for a number of reasons, but a common rationale for choice is that it will encourage greater competition between schools. Indeed, the most common expectation of charter school policies, according to state laws, is that they will generate spillover effects to traditional public schools, through the mechanism of competition (see Wohlstetter et al., 2013, for a list of specific state laws). This is an important part of the theory of choice because otherwise school-choice policies could only theoretically benefit the small percentage of students who attend schools of choice. Some studies have found that increased competition is associated with small but statistically significant increases in student achievement (e.g., Bohte, 2004; Holmes, DeSimone, & Rupp, 2003; Sass, 2006). Competition could also decrease student achievement if parents do not choose schools based on academic performance or school quality, or if schools “cream-skim” the highest performing students from traditional public schools (Ni & Arsen, 2010). Some studies have found that competition has lowered student outcomes over time (e.g., Ni, 2009). School choice may also affect other outcomes, such as high school and college graduation rates (Foreman, 2017), school efficiency (Arsen & Ni, 2012), improved matching between families and school types (DeAngelis & Holmes Erickson, 2018), competition for teachers (Jackson, 2012), or segregation and stratification (Frankenberg & Lee, 2003; Hsieh & Urquiola, 2003; Swanson, 2017). In this review, however, we focus only on impacts of competition on student achievement.

The empirical research on whether and to what extent competition improves student achievement is thus mixed, perhaps due to the different contexts and methods studied. Furthermore, the theory of competition in schools is underconceptualized, with few developments since Friedman’s argument in the 1950s, with follow-up by Hoxby and others in the 1990s (Chubb & Moe, 1990; Friedman, 1955; Hoxby, 1994, 2000). Our work also aims to inform theoretical predictions based on policy design, setting/context, methods, and measurement.

The specific design of school-choice policies may influence the nature of competition. It is important to examine not just whether choice improves outcomes, but also when and under what conditions (Berends, 2015). Key policy design conditions include state-level contexts, which have different charter and voucher laws. State and local governments set the “rules of the game” to shape education markets (Bagley, 2006; Levin, 2012; Wong & Shen, 2002), specifying whether, for example, there are caps on the number or the type of charter schools (Betts, 2009; Bifulco & Ladd, 2006), how families and private school providers become eligible to participate in voucher programs, funding mechanisms of school choice, location and mission constraints, funding for facilities, which charter schools are authorized (Bulkley & Fisler, 2003; Carlson, Lavery, & Witte, 2012; Henry & Dixson, 2016; Zimmer, Gill, Attridge, & Obenauf, 2014), or how parents can apply to schools (Levin, 2012).

Local markets in education are thus significantly varied, potentially generating different competitive effects, yet these variations have not been systematically analyzed. Through the design of these policies, different levels of competition may occur. For example, states with no caps on charters may foster greater charter school market share, whereas other policies may restrict the extent of competition that may occur. However, a certain threshold of school-choice options or charter school density—a “tipping point” (Betts, 2009)—might be necessary before real competition exists. In empirical studies, scholars have used measures of market share (e.g., the percentage of private schools, or districts where at least 6% of students attend schools of choice) to capture the extent of competition facing traditional public schools.

Individual studies tend to focus on just one type of school-choice policy, focusing on a school voucher, open enrollment, or charter school policy, and its competitive effects on student outcomes. However, it is also important to compare how these policies create different climates of competition and, as a result, may have different impacts on student outcomes. Competition between public schools may operate differently than competition between public and private schools, depending on state accountability policies, transparency of information about school options, and the capacity and autonomy of schools to respond to enrollment pressures.

Competition resulting from school choice may also operate differently depending on the context and population of study. Policymakers and advocates have claimed that low-income students, whose families have been unable to exercise school choice by purchasing a home in a desirable attendance zone, paying private school tuition, or homeschooling their children, stand to benefit the most from choice and competition (Howell & Peterson, 2002). School-choice policies have been taken up in urban settings, disproportionately affecting African American students (Scott & Holme, 2016). In means-tested choice programs, those where only low-income students or those attending a low-performing school are eligible, these students may have greater access to school-choice programs; however, research finds lower rates of take-up and higher rates of attrition from voucher programs among these students in some cases (Chakrabarti, 2013; Cowen, Fleming, Witte, & Wolf, 2012; Fleming, Cowen, Witte, & Wolf, 2015; Howell, 2004; Howell & Peterson, 2002; Plucker, Muller, Hansen, Ravert, & Make, 2006), but not in others (Anderson & Wolf, 2017; Figlio, Hart, & Metzger, 2010; Hart, 2014; Kisa, Dyehouse, Park, Andrews-Larson, & Herrington, 2017; Metcalf, West, Legan, Paul, & Boone, 2003; Paul, Legan, & Metcalf, 2007). Important questions thus remain in the literature about which populations benefit from such policies, particularly through the indirect effects of choice generated by competition.

Researchers have conceptualized competition in different ways, which may also account for some of the variation in findings. For example, some researchers use geographic measures of competition, such as proximity, or the distance of a traditional public school to a charter or private school, as a measure of competition. Others use market-based measures of competition, such as the amount of market share schools of choice hold, the density of charter schools in a district, or the enrollment loss of a school to other schools in the district. Furthermore, competition can be measured at different levels—the state, district, or school level, for example. Studies may use actual or realized entry of schools of choice as a measure of competition (e.g., declining enrollment and its associated loss of funds) or potential entry, such as the passage of a state charter law (Bohte, 2004; Maranto, Milliman, & Hess, 2010), whereby the threat of competition could elicit a response from schools.

Some of these differences in measures might capture different dimensions of competition (Creed, 2015), such as the difference between responses due to perceived threat versus actual threat resulting from competition. Geographic measures capture a kind of threat as well, whereas market-share measures can capture the actual losses of students from public schools to other sectors. If these measures capture different dimensions of competition and therefore might be empirically different, this might help to explain some of the divergent findings in the literature.

In addition to the particular measure of competition, studies vary in the level of competition they capture, or whether competition is experienced primarily at the school level or district level. Some previous studies have used school-level measures of competition, such as the distance to the nearest charter school (e.g., within 2.5 miles) as a proxy for competitive pressure (Bettinger, 2005; Bifulco & Ladd, 2006; Booker et al., 2008; Imberman, 2007; Zimmer & Buddin, 2009), the idea being that traditional public schools that are close to a charter school might feel greater pressure than those that are further away from one. Other studies have used variations of this by looking at the number of charter schools within a given radius, rather than just determining whether there was at least one charter school, and another study looked at the percentage of charter school enrollment in a given radius, rather than the number of charter schools. Other studies test the relationship between student enrollment losses from traditional public schools to charter schools, and the subsequent achievement gains (Betts, 2009).

Other market-share measures of competition are measured at the district level, such as whether the district had at least one charter school or the percentage of students enrolled in charter schools in the district (Booker et al., 2008; Carr, 2007, 2011; Hoxby, 2003; Ni, 2009; Sass, 2006; Zimmer & Buddin, 2009). Some argue that the district level is appropriate for measuring competition because, in many districts, important decisions regarding changes and improvements, such as per-pupil spending, textbooks, and curriculum standards, are made at the district level, not at the school level. Therefore, although economic theory suggests that competition will be felt at the school level—the unit of analysis being the “firm”—the reality of most districts suggests that this is often not the case.
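To make these operationalizations concrete, the sketch below computes two commonly used competition measures from hypothetical enrollment data; the district names, enrollment figures, and distances are illustrative assumptions rather than data from any study in this review, and the 6% threshold simply echoes the market-share example mentioned earlier.

```r
# Hypothetical district-level data (values are illustrative only)
districts <- data.frame(
  district           = c("A", "B", "C"),
  charter_enrollment = c(1200, 300, 0),
  total_enrollment   = c(15000, 9000, 4000)
)

# Market-share measure: share of public school enrollment held by charter schools
districts$charter_share <- districts$charter_enrollment / districts$total_enrollment

# A dichotomous version: does charter market share meet a 6% threshold?
districts$high_competition <- districts$charter_share >= 0.06

# Hypothetical school-level geographic measure: is there at least one charter
# school within 2.5 miles of each traditional public school?
dist_to_nearest_charter <- c(1.8, 4.2, 0.9, 6.5)  # miles, illustrative
near_charter <- dist_to_nearest_charter <= 2.5

districts
near_charter
```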

Competition may affect some outcomes and not others, or it may have effects that are not measured. For publicly available measures such as student achievement (e.g., math and reading scores), competition may have a greater impact, given that these scores signal school quality to parents and are also important for accountability policies and school reauthorization or closure. Theoretically, however, schools facing competitive pressures should become more responsive to the desires of parents and local communities. Parents and taxpayers may value other outcomes beyond raising test scores, such as student motivation or behavior (Tuttle et al., 2015), higher wages (Dobbie & Fryer, 2015), school safety (Hamlin, 2017), student-body diversity (Frankenberg & Lee, 2003), or school values (Betts, 2009). Researchers have only recently started to explore these other outcomes of school-choice policies. Although we focus on student test-score outcomes for our meta-analysis, we acknowledge these are simply one measure of student outcomes of interest. Increased competition may also have other impacts that we do not analyze here, including effects on segregation or stratification (Hsieh & Urquiola, 2003; Swanson, 2017), school climate and safety (Dynarski, Rui, Webber, & Gutmann, 2018; Howell & Peterson, 2002; Shakeel & DeAngelis, 2018; Witte, Wolf, Cowen, Fleming, & Lucas-McLean, 2008; Wolf et al., 2013), crime (DeAngelis & Wolf, 2016; Deming, 2011; Dills & HernándezJulián, 2011; Dobbie & Fryer, 2015), and civic engagement (DeAngelis, 2017; Wolf, 2007).

Studies examining the effects of competition face considerable threats to validity. Unlike studies of school-choice effects, such as those examining the effects of attending a charter school, which have in some cases used lottery-based designs or random assignment (the “gold standard” for identifying causal effects), the best research on competitive effects has used “quasi-experimental designs,” in which units, or schools, are not assigned to conditions randomly (Shadish, Cook, & Campbell, 2001). Isolating the impact of competition from other policies and demographic changes is challenging due to the nonrandom location of schools of choice and the nonrandom exit and sorting of students across schools (Betts, 2009). Recent studies have used designs that aim to address these issues. Some have used instrumental variables to address the problem of endogenous location of schools of choice by incorporating a variable that predicts, for example, charter school location, and more recent studies increasingly use longitudinal student-level data with fixed effects to address the nonrandom, and often unobservable, sorting of students between traditional schools and schools of choice. Studies using research designs that reduce the risk of bias (i.e., instrumental variables, student fixed effects) may yield different findings.

In light of the various factors that may influence the ways in which competition affects student achievement, we test whether studies using “quasi-experimental designs” diverge from those that do not. We also explore whether the particular measure of competition used (e.g., geographic proximity, student enrollment loss), along with other theoretically driven factors, explains variation in study results.

Method

The focus of this article is the exposure to competition resulting from school choice. To assess competitive effects, we conducted a meta-analysis largely following prescriptions by Cooper, Hedges, and Valentine (2009). In the following sections, we describe the procedures used to review and meta-analyze this body of literature.

Inclusion Criteria

For a study to be included in the meta-analysis, it must have met the following criteria. First, studies must include a specific measure of competition, whether the competition resulted from a specific policy change, private schools, charter schools, or traditional public schools. We excluded studies examining competition among students within schools, as our focus is on competition resulting from a specific policy change—school choice—rather than other forms of competition that may be present in schools generally. Second, the population of interest includes students in any K-12 school, public or private, that may be subject to competition from any type of policy (i.e., charter schools, vouchers, presence of private schools). We excluded studies examining competition in higher education.

Third, the study must have included a student achievement outcome, operationalized as a standardized examination of any academic subject. To achieve consistency among outcome types, we excluded studies whose outcomes were reported as percentiles or percentage proficient. Although there are a number of other outcomes that shape a student’s educational trajectory and life chances (e.g., graduation rates) and student behaviors (e.g., attendance), we did not include them because few studies reported such outcomes.

Fourth, we did not restrict our search to any time period, but most studies were from the 1980s or 1990s onward when most school voucher or charter school policies were implemented or expanded. We wanted to capture studies published after 2001, since the last systematic review was published (Belfield & Levin, 2002), but chose to include studies in all years for the analysis.

We first included studies from any country that had an abstract published in English. Although we anticipated that most research would have been conducted in the United States, several other countries have experimented with school choice. Although the effects of competition in those contexts could inform policies in the United States or in other countries, these contexts are substantially different, so we did not code studies that used samples outside of the United States.

We included all empirical quantitative studies that had an explicit measure of competition, including peer-reviewed articles, policy reports, dissertations, and so on. (We excluded studies solely using simulation techniques.) We included a broad range of designs to be as comprehensive as possible in our review, including correlational studies, studies using multiple regression (ordinary least squares [OLS], hierarchical linear modeling), quasi-experimental designs (QEDs; that is, instrumental variables, regression discontinuity, fixed effects, difference in differences), and others. (While we would include them, we did not find lottery-based studies or studies using random assignment. It is virtually impossible to randomly assign competitive pressures to individual schools.) Given the various designs and models used in the literature base, we used appropriate conversion formulae and aggregated studies according to design when needed, and we will discuss these procedures in subsequent sections. We did not include qualitative evidence.

Search Strategy

In fall 2015, we conducted an exhaustive literature search of electronic databases using a broad array of strategies with variants of the search term “school competition.” To locate all potentially relevant articles, we first searched the following electronic databases: EBSCOHost, Academic Search Complete, EconLit, ERIC, PsycINFO, and ProQuest Dissertations and Theses. In each of these databases, we used the following search term string: “competi* AND (school* OR education)”. The truncation technique yielded any variant or form of the search terms. Second, we consulted the reference lists of key reviews of the literature (Belfield & Levin, 2002; Egalite, 2013; Linick, 2014; Ni & Arsen, 2010), and the reference lists of all studies we coded, to identify additional relevant studies.

Once all citations had been retrieved, abstracts for these studies (k = 8,828) were judged for relevance, resulting in a pool of studies that would possibly meet the inclusion criteria. The full texts of these potentially codable studies (k = 431) were reviewed and evaluated with the inclusion criteria. Ancestry searches were conducted by reviewing the reference section of all relevant studies retained for coding as well as review articles, yielding 93 additional titles and abstracts for screening. (Two of these were identified by an anonymous reviewer.) See Figure 1 for the flow of our search retrieval.

Figure 1. Search Retrieval Flow.

For studies that were eligible for inclusion, but were missing key information, we contacted the authors to request missing data if the paper was published within the past 5 years. We made six such requests and received the necessary data from two study authors. Those studies were included, and we had to omit at least some results from the others.1

Information Retrieved From Studies

Numerous characteristics were coded directly from each study’s research report. In some instances, some inference was necessary, such as using preestablished definitions to code ambiguous characteristics. The coded characteristics encompassed seven broad distinctions among studies: (a) the research report, (b) the research design, (c) the school competition variable, (d) the outcome, (e) the sample, (f) the measure of student achievement, and (g) the estimate of the relationship between competition and student achievement.

The first category of codes was characteristics about the research report, namely, the type of publication and the research design. We were interested in the type of publication (peer-reviewed journal article, report, working paper, or dissertation/thesis), and distinguished studies as a peer-reviewed publication or non-peer-reviewed. This code allows us to assess the potential impact of publication bias. The second group of codes designated whether the study employed a QED or a non-QED. We categorized QED studies as using methods such as instrumental variables, fixed effects, and difference-in-difference. The majority of non-QED studies used OLS regression, and a few reported correlations or group comparisons.

The third group of codes included characteristics of school competition. We coded for four major types of competition measures used in the studies: passage of policy/law, where the measure of competition was simply the passage of a voucher policy or charter school law; geographic proximity or distance, where competition was measured in terms of the distance to a school of choice, for example; market share, where competition was represented by the number or percentage of students in schools of choice, or by a reduction in market share, such as loss of student enrollment to schools of choice; and a separate category for studies combining measures of school competition. We also differentiated between studies where competition was measured at the school level, district/county level, or state level (i.e., passage of a charter law). We also coded for the type of competition, such as whether competition was introduced through open enrollment in traditional public schools, private schools (i.e., voucher/choice programs or the presence of nearby private schools), charter school legislation, or some combination of these.

The fourth set of codes captured dimensions of the sample. Specifically, we coded for the region or state of the sample; the percentage of students who qualified for free or reduced lunch; the geographic location of the sample (e.g., urban, rural, suburban, or mixed); the grade level(s) of the students in the sample; and students’ race/ethnicities and gender. Because of limited variability among these characteristics across studies and the low frequency of studies reporting student demographics, we limited our analyses to the percentage of the sample reported as free and reduced lunch (FRL) eligible and percentage of the sample reported as non-White (i.e., minority student composition).

The fifth set of codes focused on dimensions of the outcome variable. Because we focused on academic outcomes, we coded for whether the effect size captured standardized achievement test scores, grades, or other cognitive assessments of academic performance. Another distinction while coding the outcome was whether high stakes were attached to the assessment or not (e.g., a statewide standardized assessment vs. a college entrance exam). We also coded the specific domain measured (e.g., literacy, math). When the outcome combined literacy and math outcomes, we coded it as a general achievement outcome.

One important feature for the outcome was the unit of analysis, such as whether it was at the student, school, or district/county level. Because this often influenced the kinds of sample sizes used in the study, we decided to divide our pool of included studies based on the unit of analysis and conduct three separate meta-analytic investigations. In other words, we divided our pool of included studies into three groups based on effects with achievement outcomes reported at the district/county level (“district-level”), school level, or student level.

There were several studies that used the same underlying data (e.g., administrative data from the state of Florida, with the same or overlapping years). We used the following procedure to address this issue. When two or more studies were authored by the same person (e.g., a dissertation and a peer-reviewed article), we prioritized the more recent or peer-reviewed article over dissertations and working paper versions of the same study. There was one exception, where a working paper had complete results and the published version (not peer-reviewed) did not. In this case, we used the working paper (working paper: Figlio & Hart, 2010; article: Figlio & Hart, 2011). For studies using the same underlying data but with different authors, we used the following approach. We first determined whether they used the same unit of analysis (e.g., district, school, or student), as we analyzed these samples separately. If two studies used the same unit of analysis and same data, and had no more than one overlapping year (e.g., one study captured 2000–2005, and the other captured 2005–2010), we retained both studies individually. If two studies used the same unit of analysis and same data, and had more than one overlapping year, we averaged the effects across studies for our main analyses, as other researchers have done (Pham, Nguyen, & Springer, 2017).2 For our moderator analyses, we retained all studies individually to maximize the data available through a shifting unit of analysis approach (Cooper, 1998).

Coder Reliability

All reports were coded independently by trained coders, who included the first, third, fourth, fifth, and sixth authors. The coders were extensively trained on each code using the previously mentioned coding frame. Each study was double coded. As a reliability check, all pairs of codes for each study were compared for agreement between the two coders. If there were any disagreements, codes from a third coder were consulted and used for resolution. Reliability was captured as the percentage of agreement. Discrepancies occurred on 6.3% of the codes, yielding a total reliability of 93.7%, which is acceptable.

Effect Size Calculation

One of the challenges of meta-analytic research is finding comparable metrics to combine effects from multiple studies that employ a variety of methods and report varying degrees of descriptive statistics. Therefore, we used a variety of computational and conversion formulae to transform effect sizes into a common metric. Because the majority of studies captured the continuous relationship between the level of school competition and student achievement in regression models, we opted to use partial correlation coefficients (Aloe, 2014, 2015; Aloe & Thompson, 2013). Partial correlations capture the linear relationship between two continuous variables while controlling for covariate effects. When studies reported regression coefficients and necessary data such as a t-statistic value or a standard error, we were able to derive partial correlation coefficients (rp) and their variance (v(rp)) using Aloe’s (2014, 2015) suggestions:

\[
r_p = \frac{t_f}{\sqrt{t_f^2 + (n - p - 1)}}, \qquad
v(r_p) = \frac{\left(1 - r_p^2\right)^2}{n - p - 1}.
\]

In these formulae, t_f is the t statistic of the regression coefficient for the competition variable, p is the number of predictors in the model, and n is the sample size. If studies did not provide the information needed to derive standardized coefficients, they were excluded from the analyses.
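As a minimal illustration of this conversion, the following sketch derives a partial correlation and its variance from a single hypothetical regression result; the values of t_f, n, and p are invented for the example and do not come from any included study.

```r
# Hypothetical regression result for a competition coefficient
t_f <- 2.10   # t statistic of the competition coefficient
n   <- 1200   # sample size
p   <- 8      # number of predictors in the regression model

r_p  <- t_f / sqrt(t_f^2 + (n - p - 1))   # partial correlation (Aloe, 2014)
v_rp <- (1 - r_p^2)^2 / (n - p - 1)       # sampling variance of r_p
round(c(r_p = r_p, v_rp = v_rp), 4)
```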

Data Integration

Before conducting any meta-analytic procedures, we counted the number of positive, zero, and negative effects. Next, average effect sizes were aggregated using an intercept-only random effects meta-regression model. We used a weighting procedure to calculate average effect sizes across independent samples: we multiplied each effect size by the inverse of its variance and then divided the sum of these products by the sum of the inverse variances. This procedure gives more weight to larger samples, which is generally preferred (Hedges & Olkin, 1985), as larger samples give more precise population estimates. We also calculated 95% confidence intervals (CIs) for the weighted average effect sizes.
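The weighting step can be illustrated with a small sketch; the three partial correlations and variances below are hypothetical, and for simplicity the example uses fixed-effect (inverse-variance) weights, whereas the reported analyses add a between-sample variance component under the random effects model.

```r
# Hypothetical partial correlations and their variances from three samples
rp <- c(0.04, -0.01, 0.08)
vi <- c(0.0025, 0.0016, 0.0049)

w      <- 1 / vi                        # inverse-variance weights
rp_bar <- sum(w * rp) / sum(w)          # weighted average effect size
se_bar <- sqrt(1 / sum(w))              # standard error of the average
ci_95  <- rp_bar + c(-1.96, 1.96) * se_bar
c(estimate = rp_bar, lower = ci_95[1], upper = ci_95[2])
```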

Identifying independent hypothesis tests.

When calculating effect sizes, determining whether an effect size is independent (participants in one sample providing the observations do not overlap with another sample) can be problematic when there are multiple effect sizes from a single sample (i.e., multiple levels of potential moderators). Therefore, we used a multivariate model and a sandwich estimator. We fitted multivariate models to account for the multiple correlated effect sizes. Assuming a correlation of .80 between outcome measures, we also employed robust variance estimation (RVE; Hedges, Tipton, & Johnson, 2010), which can further guard against threats of misspecification especially for standard errors and hypothesis testing. RVE uses observed variation in effect sizes to estimate the standard error rather than assuming variance and standard errors from a model. This approach produces more valid standard errors, point estimates, CIs, and significance tests when effect sizes are nonindependent. Without such an approach to correct effect size dependencies, variance estimates can be artificially reduced and Type I error can be inflated. We also used small-sample adjustments for t tests for hypothesis testing (Tipton, 2014).
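A minimal sketch of this modeling approach, using the R packages named later in this section (metafor and clubSandwich), is shown below; the simulated data frame, its column names, and the number of samples and effect sizes are assumptions made purely for illustration, not our analytic code or data.

```r
library(metafor)       # multivariate random effects meta-analysis
library(clubSandwich)  # robust variance estimation with small-sample corrections

# Simulated data: 10 independent samples, 3 correlated effect sizes each
set.seed(1)
dat <- data.frame(
  sample = rep(1:10, each = 3),                # independent sample ID
  esid   = rep(1:3, times = 10),               # effect size ID within sample
  yi     = rnorm(30, mean = 0.02, sd = 0.05),  # partial correlations
  vi     = runif(30, 0.001, 0.010)             # sampling variances
)

# Impute a within-sample covariance structure assuming r = .80 between outcomes
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$sample, r = 0.80)

# Intercept-only multivariate random effects model (effects nested in samples)
fit <- rma.mv(yi, V, random = ~ 1 | sample/esid, data = dat)

# CR2-adjusted standard errors with small-sample corrected t tests
coef_test(fit, vcov = "CR2", cluster = dat$sample)
```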

Moderator analysis.

Effect sizes may vary even if they estimate the same underlying population value. If effect sizes significantly vary from each other and produce heterogeneity in the distributions of effects, moderators can be assessed to systematically explain such variation. Thus, meta-regression was employed to assess the influence of each moderator separately as a covariate. Once again, a multivariate model was selected, and RVE was applied. A weighted least squares approach was used to estimate the regression coefficients, with weights based on a random effects model to approximate inverse variance. We also used small-sample adjusted t tests to determine whether there was a relationship between focal variables and effect sizes in the population (Tipton & Pustejovsky, 2015). Extensive variability across a small number of studies and low statistical power precluded meta-regression models entering multiple moderators simultaneously. Both overall and moderator analyses were conducted using the R packages metafor and clubSandwich.
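Continuing the hypothetical data from the sketch above, a single categorical moderator can be tested by adding it to the model formula; the measure labels here are invented for illustration.

```r
# Hypothetical moderator: the measure of competition behind each effect size
dat$measure <- factor(rep(c("geographic", "market_share", "policy_law"), times = 10))

fit_mod <- rma.mv(yi, V, mods = ~ measure, random = ~ 1 | sample/esid, data = dat)
coef_test(fit_mod, vcov = "CR2", cluster = dat$sample)  # small-sample t tests
```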

Models of error.

In meta-analytic research, there are two models of error: fixed effects and random effects. In a fixed effects model of error, we assume that the only source of error explaining why the effect size varies from one study to the next is sampling error, or differences among participants across studies. In a random effects model of error, a study-level variance component is assumed to be an additional source of random variation. Due to heterogeneity among studies, we report our meta-analytic results using random effects only. Variation among study designs, characteristics, and variables associated with educational research warrants the use of random effects models, the approach recommended by the vast majority of statisticians (e.g., Hedges, 1983; Hedges & Vevea, 1998). In addition, given the more conservative nature of random effects models of error and the exploratory nature of our analyses, we opted to report findings with p values less than .10.

Publication Bias

To assess bias in meta-analyses, researchers have used a variety of methods, each with its own limitations. An additional layer of complexity in our meta-analytic data set is the multivariate nature of the effect sizes nested within each study. Because traditional methods for assessing publication bias rely on a single summary effect for each study, and it is difficult to determine what that summary effect should be in a multivariate meta-analysis, we could not use these techniques. One technique that accounts for nested effects is a modified version of Egger’s regression test, which assesses the relationship between a study’s variance (i.e., precision) and the effect size magnitude. To conduct this test, we ran a meta-regression with effect size variance as a moderator.
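In the same hypothetical setup used above, this modified Egger-type test amounts to entering each effect size's sampling variance as a moderator; a significant slope would indicate a relationship between precision and effect magnitude.

```r
# Egger-style regression test adapted to the multivariate model:
# use the sampling variance of each effect size as a moderator
fit_egger <- rma.mv(yi, V, mods = ~ vi, random = ~ 1 | sample/esid, data = dat)
coef_test(fit_egger, vcov = "CR2", cluster = dat$sample)
```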

Another related test is to assess the impact a single study may have on overall meta-analytic averages. Methods such as one-study removed or leave-one-out analyses also rely on univariate meta-analyses without any nested effect sizes. In light of this limitation, for our multivariate meta-analysis, we calculated a Cook’s distance (Cook, 1979) for each nested effect to assess whether particular effect sizes significantly altered the overall average.
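The influence of individual effect sizes can be screened in the same hypothetical model using metafor's Cook's distance method; unusually large values flag effects that disproportionately shift the overall average.

```r
# Cook's distance for each effect size in the multivariate model
cd <- cooks.distance(fit)
head(sort(cd, decreasing = TRUE))   # inspect the most influential effects
```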

Results

Our literature search yielded almost 9,000 studies. From an initial screening of the titles and abstracts, we included more than 400 for full-text retrieval to further check them against the inclusion criteria. After screening full-text documents, we identified a total of 102 studies eligible for inclusion, published from 1992 to 2015. During coding, we had to exclude 10 of these studies due to missing sample sizes, or upon discovery that they did not actually fit eligibility criteria (e.g., did not measure student achievement), resulting in 92 studies eligible for inclusion.

Overall Effects of School Competition on Student Achievement

As discussed in the “Method” section, we split our analyses into three sets of studies depending on their unit of analysis: (a) studies that use districts or counties as units of analysis; (b) studies that use schools as units of analysis; and (c) studies that use students as units of analysis. We were also interested in the impact of research design. Although we test the influence of design type as a moderator, we also wanted to examine our main effects and moderator tests using just the QED studies. Due to the small number of QED studies using districts or counties as the units of analysis, we only ran this sensitivity test for studies using schools or students as units of analysis. In sum, this was a robustness check and, in a way, a “best-evidence synthesis.”

District level.

We first examined studies where the outcome unit of analysis was at the district level (k = 24 unique samples). There were a total of 139 effect sizes extracted with 68 in the positive direction and 71 in the negative direction (no zero effects). The average weighted partial correlation of school competition and student achievement was –.002 (95% CI = [−.07, .06]), under random effects (see Table 1). Therefore, the hypothesis that the effect of school competition on student achievement is equal to zero was retained.

Table 1.

Overall Main Effects for Full Sample of Studies.

Unit of analysis        k    rp      95% CI         σ²₁   σ²₂   Total effects   Positive effects   Negative effects   Zero effects
District/county level   24   −.002   [−.07, .06]    .01   .02   139             68                 71                 0
School level            33   .06†    [−.01, .14]    .04   .00   686             378                300                8
Student level           43   .001*   [.001, .002]   .00   .00   1,502           990                490                22

Note. σ²₁ and σ²₂ represent the degree to which effects differ due to reasons other than sampling error at the effect size level (σ²₁) and the sample level (σ²₂), respectively. rp = partial correlation; CI = confidence interval.
* p < .01. † p < .10.

School level.

Second, for studies with schools as the unit of analysis (k = 33), we extracted a total of 686 effect sizes. There were 378 positive effects, eight zero effects, and 300 negative effects. When meta-analyzing the effects together, we found a partial correlation of .06 (95% CI = [−.01, .14]). This effect was on the borderline of significance (p < .10), but the CI includes zero. Due to the exploratory nature of our synthesis, we argue that this is a small but notable positive influence of school competition. When only examining the QED studies (k = 24), the effect was of the same magnitude, .06 (95% CI = [−.02, .14]), but was still not significantly different from zero (p = .16).

Student level.

Finally, for studies with students as the unit of analysis (k = 43), there were a total of 1,502 effects with 990 in the positive direction, 490 in the negative direction, and 22 effects that were zero. The weighted average effect was .001 (95% CI = [.001, .002]). The weighted average partial correlation was significantly larger than zero (p < .01). From the subset of QED studies (k = 40), the effect remained at .001 (95% CI = [.000, .002]).

Summary of main effects.

Overall, the main effects of school competition on student achievement were small. For studies with schools as units of analysis, the average effects were positive and on the borderline of significance. For studies with students as units of analysis, the effects were significant but only slightly positive. At the district level, there were no significant effects. These effects were consistent with sensitivity analyses that excluded non-QED studies. Moreover, there was variation in the valence of effects as well as some heterogeneity, as indicated by the σ² estimates.

Moderator Analyses

Next, we sought to examine moderating variables that may explain some of this variation (see Table 2). By creating groups of studies (or samples within studies) and dummy coding them along the varying moderator characteristics, we assessed the influence of a number of factors in a series of meta-regression models. Specifically, we examined the following moderators: type of competition (traditional public school, private school choice/presence, charter school, mixed); publication status (peer-reviewed or not); measure of competition (geographic proximity, market share, passage of law or policy, mixed); level at which competition is measured (county, district, school); outcome domain (math, literacy, general); achievement outcome value (high stakes or other); and research design (quasi-experimental or non-quasi-experimental). The continuous moderators were the percentages of FRL-eligible and non-White students in the study samples. In the subsequent sections, we discuss the results of each of these tests, mainly highlighting the significant moderating effects. For these analyses, unlike the analysis of overall main effects, we included all studies, even those using the same or similar data, as our interest was to explore the variation across studies. We also present our sensitivity analysis using just the QED studies, and report the results when they alter the original analysis.

Table 2.

Moderator Analyses for Full Sample of Studies.

                                     District/county level          School level                    Student level
Moderator                            k    Coefficient  SE    p      k    Coefficient  SE    p       k    Coefficient  SE     p
Publication status
  Non-peer-reviewed (ref.)           2                               16                              13
  Peer-reviewed                      24   .10          .05   .10    27   .003         .06   .96     32   −.001        .001   .37
Type of competition
  Charter (ref.)                     4                               18                              17
  Private—voucher/choice program     1    −.49         .25   .12    17   .22*         .08   .01     17   −.001        .001   .40
  Private—nearby private schools     11   −.19         .17   .42    2    −.06         .06   .37     4    .001         .003   .89
  Traditional public                 3    −.17         .18   .44    6    −.03         .04   .57     7    .001         .003   .79
  Mixed                              2    −.20         .17   .34    1    .27**        .01   <.001   7    .000         .001   .99
Measure of competition
  Geographic proximity (ref.)        4                               16                              19
  Market share                       17   .06          .05   .41    21   .01          .01   .18     25   −.000        .001   .75
  Policy/law                         1    .37*         .03   .01    9    .12          .07   .12     6    .001         .002   .64
  Mixed                              1    .14          .05   .09    9    .02          .01   .28     4    .000         .000   .65
Level of competition
  County (ref.)                      12                              2                               4
  District                           12   −.004        .02   .96    9    .03          .04   .62     8    .000         .02    .98
  School                             2    .03          .03   .38    36   .03          .04   .58     36   −.004        .01    .82
Outcome domain
  General (ref.)                     17                              12                              6
  Math                               18   .03          .03   .34    23   .03          .06   .64     33   −.002        .002   .49
  Literacy                           7    .06          .05   .29    29   .03          .06   .60     37   −.001        .002   .73
Outcome value
  Other (ref.)                       5                               3                               7
  High-stakes                        19   −.01         .06   .81    40   .11          .05   .14     38   −.001        .03    .88
Research design
  Non-QED (ref.)                     24                              20                              15
  QED                                3    −.02         .02   .58    29   −.00         .01   .99     35   −.01         .01    .73

Note. The top row in each category was the reference group for dummy-coded moderator variables. QED = quasi-experimental design.

* p < .05. ** p < .01. *** p < .001.

Publication status.

Under the assumption that peer-reviewed journal articles present larger and significant effects than studies that are unpublished (i.e., file-drawer problem), we tested whether this was the case. There was no significant moderation of publication status with unpublished studies as the reference group. This suggests that larger effects were not found for peer-reviewed studies compared with studies that were not peer-reviewed.

Type of competition.

Regarding the type of competition, we grouped studies according to whether they examined competition from charter schools, private school choice/vouchers, nearby private schools, traditional public schools, or a mixed set of school types. We found significant moderation by competition type for studies using schools as units of analysis, with charter schools as the reference group. Compared with charter schools, private school choice (k = 17, β = .22, p < .05) and a mixed set of competition types (k = 1, β = .27, p < .001) had larger associations between competition and student achievement overall. When only looking at QED studies, competition from private school choice (β = .20, p < .05) had larger associations between competition and achievement compared with charter school competition. There were no QED studies with mixed competition effects.

Measure of competition.

One of the other variables of interest was how school competition was operationalized: specifically, whether studies measured school competition via geographic proximity, market share, passage of a policy or law, or some combination of these measures. Our analyses revealed significant differences among competition measures for studies using districts as units of analysis. Studies with policy-enacted school competition had the largest effects. However, for studies using districts as the unit of analysis, there was only one study examining policy-enacted school competition, albeit with a large effect, which makes it difficult to interpret the moderating effect. There was no moderation of competition measure for studies using students or schools as units of analysis. In addition, using QED studies only yielded the same pattern of effects.

Level of competition measure.

There was no evidence that the level of the measure of competition, whether it was measured at the county, district, or school level, moderated competitive effects on student achievement. This effect was retained in the sensitivity analysis when limiting to only QED studies.

Student achievement outcome.

We were interested in two aspects of the outcome that may moderate school competition’s effects on student achievement: the academic domain of the outcome and whether it was a high stakes assessment or not. For both aspects, there was no evidence of significant moderation regardless of unit of analysis. This was also true in sensitivity analyses with only QED studies.

Research design.

To account for the variety of research designs and statistical techniques used, we made a broad distinction based on whether effects were derived from studies that implemented a quasi-experimental technique or not. Comparing effects from studies that used a QED with those that did not, we found no evidence of moderation by research design.

Sample characteristics.

To assess the variables of sample percentage of FRL-eligible students and sample percentage of minority students, we only conducted meta-regressions for studies using schools or students as units of analysis, because too few studies using districts as units of analysis reported sample characteristics to meaningfully conduct a meta-regression (k < 10). Overall, percentage of FRL-eligible students did not moderate effects of school competition (school unit of analysis: β = .02, p = .47; student unit of analysis: β = .01, p = .48). However, percentage of minority students was a significant moderator for studies using students as units of analysis (β = .04, p < .001). There was also borderline significant moderation for studies using schools as units of analysis (β = .03, p = .07). In sum, proxy measures of demographic characteristics showed mixed evidence of moderation: although we found little evidence for a moderating role of percentage of FRL-eligible students, there was some evidence that school competition may have a larger influence on student achievement for samples with a greater percentage of minority students.

Publication Bias and Sensitivity Analysis

To assess publication bias, we conducted a modified version of Egger’s regression test, using each effect size variance as a predictor in a meta-regression moderator test. In all three sets of analyses, for district, school, and student units of analysis, there was no evidence that study variance impacted overall effects. This indicates that studies of varying precision did not significantly alter the effect of competition on student achievement. We also tested whether particular effects were disproportionately affecting the overall analyses. Upon examination of Cook’s distance values, we did not find any evidence of effects having a significant influence.

Discussion

Consistent with prior reviews (e.g., Belfield & Levin, 2002; Ni & Arsen, 2010), we find that, in general, the effects of school competition on achievement are very small and, on average, the effects are positive. However, the effects vary across studies and across different dimensions to some degree. These effects are significantly different from zero for studies that used student-level units of analysis, suggesting that more aggregate units may not be able to capture the effects of competition. As research has moved toward measuring outcomes at the student level, this may increase the power to detect competitive effects.

In general, competition resulting from school-choice policies does have a small positive effect on student achievement. The lack of an overall negative impact on student outcomes might ease critics’ concerns that competition will hurt those students “left behind” by school-choice policies. School choice can concentrate historically disadvantaged students, those whose parents may not be able to participate in school-choice programs or transport their children to schools farther away, in traditional public schools. Researchers and policymakers have been concerned that this concentration may further disadvantage these schools and students, who may be unable to compete on an even playing field. Our results suggest that research to date, overall, does not support this hypothesis, although specific studies have found negative effects of competition in some contexts.

It is important to note that while positive, the effect of competition is very small, especially when observing average weighted partial correlations, which control for the wide range of covariates in the majority of studies (Doucouliagos, 2011). Therefore, it is important not to overstate the claim that “a rising tide lifts all boats,” given that the effects are too small to have a major impact on educational quality and inequality on their own. Furthermore, we only examined the effects on student achievement; there may be other impacts of competition that are important to measure, such as whether schools become more efficient (Arsen & Ni, 2012), or whether competition increases segregation (Frankenberg & Lee, 2003; Hsieh & Urquiola, 2003; Swanson, 2017), which may be important to policymakers and citizens regardless of the impact on student achievement.

We find significant variation in the effects of competition on student achievement. It is important for policymakers to understand the differential impacts of competition under different types of school-choice policies. We found some evidence, in studies using school-level measures, that the type of school-choice policy moderated the effect of competition, with significant differences between competition resulting from charter schools and private schools. For school-level analyses, there was some evidence to suggest that competition from private school choice through school vouchers had a greater effect on student achievement than charter school competition. Although the results were not consistent across all analyses, charter school competition seemed to have a smaller or no different impact on student achievement than other choice policies. This warrants further investigation given the rapid expansion of charter schools in the past several decades, in part based upon the claim that competition will improve traditional public schools (Wohlstetter et al., 2013).

Our results suggest that competition from private school choice (through voucher policies) can have significant positive impacts on overall student achievement, often greater than those from charter school competition. However, there are concerns about accountability in voucher programs. Although prior studies on school vouchers have found positive effects (Cowen, 2008; Greene, 2001; Greene, Peterson, & Du, 1999; Howell, Wolf, Campbell, & Peterson, 2002; Rouse, 1998; Wolf et al., 2013), some recent studies found null or negative impacts of vouchers for students who attended private schools (Abdulkadiroglu, Pathak, & Walters, 2015; Mills & Wolf, 2017; Waddington & Berends, 2017). This raises concerns about the use of vouchers to lift student achievement, even if they have small positive impacts via competition on traditional public schools.

It is also important for meta-analytic work on controversial topics to consider the politics of research production and dissemination. Although we did not find any differences between peer-reviewed and non-peer-reviewed studies in the moderator effects, future syntheses of controversial topics such as school choice should consider whether or how to include studies that come from think tanks ideologically aligned with school-choice policy. Some of these unpublished reports only reported statistically significant results, noting that they were excluding null results. The fact that we found no evidence of publication bias runs counter to typical hypotheses in research syntheses, as it is often the case that it is harder to publish null or small effects given that they are less “interesting” to a general audience, and publishers may desire to publish studies that have significant positive effects. Indeed, Belfield and Levin (2002) found in their prior study evidence of publication bias that may have overstated the benefits of competition. However, this was not the case in our investigation.

Previous reviews have raised concerns that inconsistencies in findings on competitive effects may be due, in part, to a lack of agreement about how to define, measure, and operationalize competition (Linick, 2014; Ni & Arsen, 2010). We found few differences based on the measure of competition used. District-level studies that used a policy or law change as the measure of competition showed larger competitive effects than studies using more fine-grained measures (e.g., market share, geographic density or proximity, or a combination of measures), but these results were not consistent. These results could suggest that the mere threat of, or change in, policy has a significant positive effect on student achievement, spurring schools to act in response, or simply that a policy change is a blunter measure. In any case, it is important to consider which measures are used. Researchers do not agree on which measure of competition is "best" (Creed, 2015; Linick, 2014), and our findings suggest that different measures of competition do not yield statistically significantly different results (Creed, 2015). However, given that prior research has found different results when using different measures (Creed, 2015), researchers should, when data are available, use multiple measures of competition and report the extent to which findings are consistent across them.

We anticipated that different research designs might yield different results. However, we found that findings from studies using quasi-experimental designs (QEDs) were consistent with those from non-QED studies. In our sensitivity analysis restricted to QED studies, most effects were consistent with the original analysis. That said, our grouping of studies into these general categories (QED or not) may not capture how specific QED approaches (e.g., instrumental variables, fixed effects) shape estimates of competitive effects on student achievement.

A key argument for school choice made by policymakers and advocates is that it has the potential to benefit historically disadvantaged students in particular, because it can break the strong link between poor neighborhoods and underfunded local schools. We tested this relationship and found that our measure of poverty, the percentage of students eligible for free or reduced-price lunch (FRL), did not moderate the effects of competition. However, we found some evidence that the sample percentage of minority students was a significant moderator, suggesting that school competition may have a larger influence on the achievement of minority students. This is consistent with advocates' claims that choice may improve educational opportunities for marginalized students in particular, not just those who choose, but also those "left behind" in traditional public schools. Given that we had to exclude several studies that did not report overall descriptive statistics for race or socioeconomic status, we urge researchers to include these data in their reports and publications or to make them available online.
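To illustrate how such a moderator test can be framed, the sketch below runs a simple inverse-variance weighted meta-regression of hypothetical effect sizes on the sample percentage of minority students and the percentage of FRL-eligible students. The data are invented for illustration, and the model is deliberately simpler than a full moderator analysis, which would also need to account for dependence among effect sizes drawn from the same studies and data sets.

import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: effect sizes, their sampling variances, and
# sample demographics (proportion minority, proportion FRL-eligible).
effects = np.array([0.02, 0.05, 0.01, 0.08, 0.03, 0.06])
variances = np.array([4e-4, 6e-4, 2e-4, 9e-4, 5e-4, 7e-4])
pct_minority = np.array([0.35, 0.60, 0.20, 0.75, 0.50, 0.65])
pct_frl = np.array([0.40, 0.55, 0.30, 0.70, 0.45, 0.60])

# Weighted (inverse-variance) meta-regression: a positive, significant
# coefficient on pct_minority would correspond to the moderator pattern
# described above, i.e., larger competitive effects in samples with more
# minority students.
X = sm.add_constant(np.column_stack([pct_minority, pct_frl]))
fit = sm.WLS(effects, X, weights=1.0 / variances).fit()
print(fit.params)   # intercept, pct_minority slope, pct_frl slope
print(fit.pvalues)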

There are several limitations to this study. First, this synthesis should not be interpreted as establishing causal relationships (Cooper, 1998). A synthesis can establish an association between a moderator variable and the outcome, but not a causal connection. Therefore, when significant associations emerge from comparisons of groups of effect sizes within a research synthesis, the results should be used to direct future research that examines those factors in controlled designs capable of appraising causal impact. Given the variation in competitive effects found, as the body of research grows, future syntheses might test additional moderators, including specific policy designs (e.g., charter caps, means-tested vouchers), as well as other outcomes, such as high school graduation rates, attendance, segregation, allocation of resources, and teacher quality. Moreover, many of the moderator tests in our study relied on a small number of studies and should be interpreted as exploratory.

As meta-analysts seek to synthesize research in education policy, they may need to account for the fact that many researchers use the same large, statewide administrative data sets as the underlying data for their analyses. As these data sets become increasingly available, we will likely see even more use of them, and thus more overlap in study data sources. Although these trends are positive for researchers overall, they make synthesizing the research more challenging, because effect sizes estimated from overlapping samples are not independent, and meta-analysts will need to think through how to handle that dependence. Despite these limitations and areas for future study, our analysis extends prior reviews of competitive effects by drawing on more recent empirical studies and by beginning to synthesize the results using meta-analytic methods.

Overall, school competition has small positive impacts on student achievement, but there is significant variation in these effects across contexts and dimensions. The findings suggest a need to examine how variations in market contexts, especially the type of school-choice policy and the population served, shape the outcomes of school-choice policies. This heterogeneity should also encourage advocates of school choice and policymakers to use caution when implementing such policies, because it is unclear whether a particular choice model will produce the expected competitive effects. School choice does not inherently generate positive outcomes or massive inequities; the design of these policies matters, particularly for their impacts on student achievement and their ability to reduce inequality.

Supplementary Material

Online appendices

Acknowledgments

The authors are grateful to Ariel Aloe, David Arsen, and Lauren Schudde for feedback on earlier drafts and presentations.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Biographies

Huriya Jabbar, Ph.D., is an assistant professor in the Educational Policy and Planning Program in the Department of Educational Leadership and Policy in the College of Education at the University of Texas at Austin. Her research examines the social and political dimensions of school choice and other market-based reforms across K-12 and higher education contexts.

Carlton J. Fong, PhD, is an assistant professor of Developmental Education in the Department of Curriculum and Instruction at Texas State University. He examines the psychological and instructional factors that impact postsecondary student success and uses meta-analytic methods.

Emily Germain is a doctoral candidate in the Education Policy and Planning program in the Department of Educational Leadership and Policy at The University of Texas at Austin. Her research examines markets in education; geography, equity, and opportunity; and sustainable development.

Dongmei Li, Ph.D. is a postdoctoral fellow in the Houston Education Research Consortium in the Department of Sociology at Rice University. Her research examines education access and equity issues in the U.S. and China, educational accountability, reform and their impacts.

Joanna Sanchez, PhD, is a Postdoctoral Research Associate in the Department of Curriculum and Instruction at Howard University. Her research interests include Latino/a parental engagement, GIS and spatial analysis in education policy, school/family/community partnerships, and STEM education.

Wei-Ling Sun is a doctoral candidate in the Education Policy and Planning program in the Department of Educational Leadership and Policy at The University of Texas at Austin. Her research examines the sociopolitical contexts of the school-to-prison pipeline from leadership perspectives and other issues related to school discipline policy reforms.

Michelle DeVall is a music teacher in the Austin Independent School District. Her research interests include equity and access to the fine arts and school choice.

Footnotes

Supplemental Material

Supplemental material for this article is available online.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

1.

The omitted studies include Kasman and Loeb (2013), who found a positive relationship between principals’ perceptions of competition and student achievement, and Welsch and Zimmer (2012), who likewise found that principals’ perceptions of competition were related to student achievement. The following studies were included, but we had to omit some results: Holmes, DeSimone, and Rupp (2003) and Jha (2013).

2.

For Kentucky studies, we prioritized Borland and Howsen’s (2000) paper, excluding prior versions that use overlapping years of data. For Michigan studies, we prioritized Bettinger (2005) over Bettinger (1999) and Ni (2009) over Ni (2007). We then took the average of Ni (2009) and Bettinger (2005) given that they used overlapping years of data. In North Carolina, we took the average of Bifulco and Ladd (2006), Holmes and DeSimone (2003), and Jinnai (2014). In Georgia, we used Geller’s (2006) peer-reviewed article instead of the older dissertation (Geller, 2000). In Chicago, we prioritized Kamienski (2011) over Kamienski (2008), and took the average of Kamienski (2011) and Snyder (2011). In Milwaukee, we took the average of Greene and Marsh (2009), Nisar (2012), Chakrabarti (2008a), and Greene and Forster (2002). In Mississippi, we kept Misra et al. (2012), the peer-reviewed article, and dropped the dissertation (Misra et al., 2010). In Florida, for student-level studies, we took the average of Sass (2006), Chakrabarti (2008a, 2008b, 2013—we only coded new tables from each), Figlio and Hart (2010), Rouse (2013), and West and Peterson (2006). For school-level studies in Florida, we took the average of Figlio and Rouse (2006), Forster (2008), Greene and Winters (2003, 2008), Greene (2001), and Chakrabarti (2013). For district-level studies in Florida, we took the average of Maranto et al. (2000) and Smith and Meier (1995).

References

  1. Abdulkadiroglu A, Pathak PA, & Walters CR (2015). School vouchers and student achievement: Evidence from the Louisiana Scholarship Program. National Bureau of Economic Research; Cambridge, MA. [Google Scholar]
  2. Ahn S, Ames AJ, & Myers ND (2012). A review of meta-analyses in education. Review of Educational Research, 82, 436–476. [Google Scholar]
  3. Aloe AM, & Thompson CG (2013). The synthesis of partial effect sizes. Journal of the Society for Social Work and Research, 4(4), 390–405. [Google Scholar]
  4. Aloe AM (2014). An empirical investigation of partial effect sizes in meta-analysis of correlational data. The Journal of General Psychology, 141(1), 47–64. [DOI] [PubMed] [Google Scholar]
  5. Aloe AM (2015). Inaccuracy of regression results in replacing bivariate correlations. Research Synthesis Methods, 6(1), 21–27. [DOI] [PubMed] [Google Scholar]
  6. Anderson K, & Wolf P. (2017). Evaluating school vouchers: Evidence from a within-study comparison (EDRE Working Paper No. 2017–10). doi: 10.2139/ssrn.2952967 [DOI] [Google Scholar]
  7. Arsen D, & Ni Y. (2012). The effects of charter school competition on school district resource allocation. Educational Administration Quarterly, 48, 3–38. [Google Scholar]
  8. Bagley C. (2006). School choice and competition: A public-market in education revisited. Oxford Review of Education, 32(3), 347–362. [Google Scholar]
  9. Belfield C, & Levin H. (2002). The effects of competition between schools on educational outcomes: A review for the United States. Review of Educational Research, 72, 279–341. [Google Scholar]
  10. Bettinger EP (2005). The effect of charter schools on charter students and public schools. Economics of Education Review, 24(2), 133–147. [Google Scholar]
  11. Berends M. (2015). Sociology and school choice: What we know after two decades of charter schools. Annual Review of Sociology, 41, 159–180. [Google Scholar]
  12. Bettinger E. (1999). The effect of charter schools on charter students and public schools. National Center for Study of Privatization in Education, Teachers College, Columbia University. [Google Scholar]
  13. Borland MV, & Howsen RM (2000). Manipulable Variables of Policy Importance: The Case of Education. Education Economics, 8(3), 241–248. [Google Scholar]
  14. Betts J. (2009). The competitive effects of charter schools on traditional public schools. In Berends M, Springer MG, Ballou D, & Walberg H. (Eds.), Handbook of research on school choice (pp. 195–208). New York, NY: Routledge. [Google Scholar]
  15. Booker K, Gilpatric SM, Gronberg T, & Jansen D. (2008). The effect of charter schools on traditional public school students in Texas: Are children who stay behind left behind? Journal of Urban Economics, 64(1), 123–145. doi: 10.1016/j.jue.2007.10.003 [DOI] [Google Scholar]
  16. Bohte J. (2004). Examining the Impact of Charter Schools on Performance in Traditional Public Schools. Policy Studies Journal, 32(4), 501–520. doi: 10.1111/j.1541-0072.2004.00078.x [DOI] [Google Scholar]
  17. Bifulco R, & Ladd HF (2006). The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy, 1(1), 50–90. doi: 10.1162/edfp.2006.1.1.50 [DOI] [Google Scholar]
  18. Bulkley K, & Fisler J. (2003). A decade of charter schools: From theory to practice. Educational Policy, 17, 317–342. doi: 10.1177/0895904803017003002 [DOI] [Google Scholar]
  19. Carlson D, Lavery L, & Witte J. (2012). Charter school governance and student outcomes. Economics of Education Review, 31, 254–267. [Google Scholar]
  20. Carr M, & Ritter G. (2007). Measuring the competitive effect of charter schools on student achievement in Ohio’s traditional public schools. National Center for the Study of Privatization in Education (Columbia University) Research Paper, 146. [Google Scholar]
  21. Carr M. (2011). The Impact of Ohio’s edchoice on traditional public school performance. Cato Journal, 31, 257. [Google Scholar]
  22. Chakrabarti R. (2008a). Can increasing private school participation and monetary loss in a voucher program affect public school performance? Evidence from Milwaukee. Journal of Public Economics, 92(5–6), 1371–1393. doi: 10.1016/j.jpubeco.2007.06.009 [DOI] [Google Scholar]
  23. Chakrabarti R. (2008b). Impact of Voucher Design on Public School Performance: Evidence from Florida and Milwaukee Voucher Programs (SSRN Scholarly Paper No. ID 1086772). Retrieved from Social Science Research Network website: http://papers.ssrn.com/abstract=1086772
  24. Chakrabarti R. (2013). Do vouchers lead to sorting under random private school selection? Evidence from the Milwaukee voucher program. Economics of Education Review, 34, 191–218. doi: 10.1016/j.econedurev.2013.01.009 [DOI] [Google Scholar]
  25. Chetty R, Hendren N, Kline P, Saez E, & Turner N. (2014). Is the United States still a land of opportunity? Recent trends in intergenerational mobility. The American Economic Review, 104, 141–147. [Google Scholar]
  26. Chubb JE, & Moe TM (1990). Politics, markets, and America’s schools. Washington, DC: Brookings Institution Press. [Google Scholar]
  27. Cook RD (1979). Influential observations in linear regression. Journal of the American Statistical Association, 74, 169–174. [Google Scholar]
  28. Cooper H. (1998). Synthesizing research: A guide for literature reviews (3rd ed.). Thousand Oaks, CA: SAGE. [Google Scholar]
  29. Cooper H, Hedges LV, & Valentine JC (Eds.). (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York, NY: Russell Sage Foundation. [Google Scholar]
  30. Cordes SA (2018). In pursuit of the common good: The spillover effects of charter schools on public school students in New York City. Education Finance and Policy, 13, 484–512. doi: 10.1162/edfp_a_00240 [DOI] [Google Scholar]
  31. Cowen JM (2008). School choice as a latent variable: Estimating the “complier average causal effect” of vouchers in Charlotte. Policy Studies Journal, 36, 301–315. doi: 10.1111/j.1541-0072.2008.00268.x [DOI] [Google Scholar]
  32. Cowen JM, Fleming DJ, Witte JF, & Wolf PJ (2012). Going public: Who leaves a large, longstanding and widely available urban voucher program? American Educational Research Journal, 49, 231–256. [Google Scholar]
  33. Creed B. (2015). Different measures, different results: Understanding the impact of multiple school competition measures in the literature. Paper presented at the Annual Meeting of the American Educational Research Association, San Antonio, TX April 18 2015. [Google Scholar]
  34. DeAngelis CA (2017). Do self-interested schooling selections improve society? A review of the evidence. Journal of School Choice, 11, 546–558. [Google Scholar]
  35. DeAngelis CA, & Holmes Erickson H. (2018). What leads to successful school choice programs: A review of the theories and evidence. Cato Journal, 38, 247–263. [Google Scholar]
  36. DeAngelis CA, & Wolf PJ (2016). The school choice voucher: A “get out of jail” card? (EDRE Working Paper No. 2016–03). Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2743541
  37. Deming DJ (2011). Better schools, less crime? Quarterly Journal of Economics, 126, 2063–2115. [Google Scholar]
  38. Dills AK, & Hernández-Julián R. (2011). More choice, less crime. Education Finance and Policy, 6, 246–266. [Google Scholar]
  39. Dobbie W, & Fryer RG (2015). The medium-term impacts of high-achieving charter schools. Journal of Political Economy, 123, 985–1037. [Google Scholar]
  40. Doucouliagos H. (2011). How large is large? Preliminary and relative guidelines for interpreting partial correlations in economics (Working Paper). Faculty of Business and Law, Deakin University, Geelong, Victoria, Australia. Retrieved from http://dro.deakin.edu.au/view/DU:30106880 [Google Scholar]
  41. Dynarski M, Rui N, Webber A, & Gutmann B. (2018). Evaluation of the DC opportunity scholarship program: Impacts two years after students applied (NCEE 2018–4010). National Center for Education Evaluation and Regional Assistance. Retrieved from https://ies.ed.gov/ncee/pubs/20184010/pdf/20184010.pdf [Google Scholar]
  42. Egalite AJ (2013). Measuring competitive effects from school voucher programs: A systematic review. Journal of School Choice, 7, 443–464. [Google Scholar]
  43. Egalite AJ (2016). The competitive effects of the Louisiana scholarship program on public school performance. Retrieved from https://ssrn.com/abstract=2739783 [Google Scholar]
  44. Figlio DN, & Rouse CE (2006). Do accountability and voucher threats improve low-performing schools? Journal of Public Economics, 90(1–2), 239–255. 10.1016/j.jpubeco.2005.08.005 [DOI] [Google Scholar]
  45. Figlio DN, & Hart CMD (2010). Competitive effects of means-tested school vouchers (NBER Working Paper No. 16056). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org [Google Scholar]
  46. Figlio D, & Hart CMD (2011). Does Competition Improve Public Schools? New Evidence from the Florida Tax-Credit Scholarship Program. Education Next, 11(1), 7–80. [Google Scholar]
  47. Figlio D, Hart C, & Metzger M. (2010). Who uses a means-tested scholarship, and what do they choose? Economics of Education Review, 29, 301–317. doi: 10.1016/j.econedurev.2009.08.002 [DOI] [Google Scholar]
  48. Figlio D, & Karbownik K. (2016). Evaluation of Ohio’s EdChoice scholarship program: Selection, competition, and performance effects. Thomas B. Fordham Institute. Retrieved from https://edex.s3-us-west-2.amazonaws.com/publication/pdfs/FORDHAM%20Ed%20Choice%20Evaluation%20Report_online%20edition.pdf [Google Scholar]
  49. Fleming DJ, Cowen JM, Witte JF, & Wolf PJ (2015). Similar students, different choices: Who uses a school voucher in an otherwise similar population of students? Education and Urban Society, 47, 785–812. doi: 10.1177/0013124513511268 [DOI] [Google Scholar]
  50. Foreman LM (2017). Educational attainment effects of public and private school choice. Journal of School Choice, 11, 642–654. [Google Scholar]
  51. Forster G. (2008). Lost opportunity: An empirical analysis of how vouchers affected Florida public schools (School Choice Issues in the State). Friedman Foundation for Educational Choice. Retrieved from http://eric.ed.gov/?id=ED508458 [Google Scholar]
  52. Frankenberg E, & Lee C. (2003, September 5). Charter schools and race: A lost opportunity for integrated education. Education Policy Analysis Archives, 11(32). Retrieved from https://epaa.asu.edu/ojs/article/view/260/386 [Google Scholar]
  53. Friedman M. (1955). The role of government in education. Collected Works of Milton Friedman Project records. Stanford, CA: Hoover Institution Archives. Retrieved from https://miltonfriedman.hoover.org/objects/58044/the-role-of-government-in-education [Google Scholar]
  54. Geller CR (2000). Private schools and public quality: An analysis of the effects of private schools on public school performance (Ph.D.). Georgia State University, Ann Arbor. Retrieved from ProQuest Dissertations & Theses Full Text. (304592221) [Google Scholar]
  55. Geller CR, Sjoquist DL, & Walker MB (2006). The effect of private school competition on public school performance in Georgia. Public Finance Review, 34(1), 4–32. [Google Scholar]
  56. Gill B, & Booker K. (2015). School competition and student outcomes. In Ladd H. & Fiske EB (Eds.), Handbook of research in education finance and policy (2nd ed.). pp. 211–227 New York, NY: Routledge. [Google Scholar]
  57. Goldhaber DD, & Eide ER (2003). Methodological thoughts on measuring the impact of private sector competition on the educational marketplace. Educational Evaluation and Policy Analysis, 25, 217–232. [Google Scholar]
  58. Greene JP, Peterson PE, & Du J. (1999). Effectiveness of school choice: The Milwaukee experiment. Education and Urban Society, 31, 190–213. doi: 10.1177/0013124599031002005 [DOI] [Google Scholar]
  59. Greene JP, & others. (2001). An evaluation of the Florida A-Plus accountability and school choice program. Center for Civic Innovation at the Manhattan Institute. [Google Scholar]
  60. Greene JP, & Winters MA (2003). When schools compete: The effects of vouchers on Florida public school achievement (Education Working Paper). New York, NY: Manhattan Institute, Center for Civic Innovation. Retrieved from http://www.manhattan-institute.org [Google Scholar]
  61. Greene JP, & Winters MA (2008). The effect of special-education vouchers on public school achievement: Evidence from Florida’s McKay Scholarship Program. Manhattan Institute; New York, NY. Retrieved from http://www.manhattan-institute.org/pdf/Effect_of_Vouchers_for_SE_Students_on_Public_School_Achievement_2-19-08.pdf [Google Scholar]
  62. Greene JP, & Marsh RH (2009). The effect of Milwaukee’s parental choice program on student achievement in Milwaukee public schools (SCDP Comprehensive Longitudinal Evaluation of the Milwaukee Parental Choice Program, Report #11). School Choice Demonstration Project, University of Arkansas. Retrieved from http://www.uark.edu/ua/der/SCDP.html [Google Scholar]
  63. Greene JP, & Forster G. (2002). Rising to the challenge: The effect of school choice on public schools in Milwaukee and San Antonio (Civic Bulletin). New York, NY: Manhattan Institute, Center for Civic Innovation. Retrieved from http://www.manhattan-institute.org/cb_27.pdf [Google Scholar]
  64. Hamlin D. (2017). Are charter schools safer in deindustrialized cities with high rates of crime? Testing hypotheses in Detroit. American Educational Research Journal, 54, 725–756. [Google Scholar]
  65. Hart C. (2014). Contexts matter: Selection in means-tested school voucher programs. Educational Evaluation and Policy Analysis, 36, 186–206. doi: 10.3102/0162373713506039 [DOI] [Google Scholar]
  66. Hattie J, Rogers HJ, & Swaminathan H. (2014). The role of meta-analysis in educational research. In Reid AD & Peters MA (Eds.), A companion to research in education (pp. 197–207). Dordrecht, The Netherlands: Springer. [Google Scholar]
  67. Hedges LV (1983). A random effects model for effect sizes. Psychological Bulletin, 93, 388–395. [Google Scholar]
  68. Hedges LV, & Olkin I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press. [Google Scholar]
  69. Hedges LV, & Vevea JL (1998). Fixed-and random-effects models in meta-analysis. Psychological Methods, 3, 486–504. [Google Scholar]
  70. Hedges LV, Tipton E, & Johnson MC (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1(1), 164–165. [DOI] [PubMed] [Google Scholar]
  71. Henry KL, & Dixson AD (2016). “Locking the door before we got the keys”: Racial realities of the charter school authorization process in post-Katrina New Orleans. Educational Policy, 30, 218–240. doi: 10.1177/0895904815616485 [DOI] [Google Scholar]
  72. Holmes GM, DeSimone J, & Rupp NG (2003). Does School Choice Increase School Quality? (Working Paper No. 9683). National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w9683 [Google Scholar]
  73. Hoxby CM (1994). Do Private Schools Provide Competition for Public Schools? (Working Paper No. 4978). National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w4978 [Google Scholar]
  74. Hoxby CM (2000). Does Competition among Public Schools Benefit Students and Taxpayers? The American Economic Review, 90(5), 1209–1238 [Google Scholar]
  75. Hoxby CM (2003). School choice and school productivity. Could school choice be a tide that lifts all boats? In The economics of school choice (pp. 287–342). University of Chicago Press. [Google Scholar]
  76. Howell WG (2004). Dynamic selection effects in means-tested, urban school voucher programs. Journal of Policy Analysis and Management, 23, 225–250. doi: 10.1002/pam.20002 [DOI] [Google Scholar]
  77. Howell WG, & Peterson PE (2002). The education gap. Washington, DC: Brookings Institution Press. [Google Scholar]
  78. Howell WG, Wolf PJ, Campbell DE, & Peterson PE (2002). School vouchers and academic performance: Results from three randomized field trials. Journal of Policy Analysis and Management, 21, 191–217. doi: 10.1002/pam.10023 [DOI] [Google Scholar]
  79. Hsieh C, & Urquiola M. (2003). When schools compete, how do they compete? An assessment of Chile’s nationwide voucher program (NBER Working Paper). Retrieved from https://www.nber.org/papers/w10008
  80. Imberman SA (2007). The effect of charter schools on non-charter students: An instrumental variables approach. New York, NY: National Center for the Study of Privatization in Education. Retrieved from http://www.ncspe.org/publications_files/OP149.pdf [Google Scholar]
  81. Imberman SA (2011). The effect of charter schools on achievement and behavior of public school students. Journal of Public Economics, 95(7–8), 850–863. 10.1016/j.jpubeco.2011.02.003 [DOI] [Google Scholar]
  82. Jackson CK (2012). School competition and teacher labor markets: Evidence from charter school entry in North Carolina. Journal of Public Economics, 96, 431–448. doi: 10.1016/j.jpubeco.2011.12.006 [DOI] [Google Scholar]
  83. Jha NK (2013). Roles of school district competition and political institutions in public school spending and student achievement (Ph.D.). The University of North Carolina at Charlotte, Ann Arbor. Retrieved from ProQuest Dissertations & Theses Full Text. (1494494106) [Google Scholar]
  84. Jinnai Y. (2014). Direct and indirect impact of charter schools’ entry on traditional public schools: New evidence from North Carolina. Economics Letters, 124(3), 452–456. doi: 10.1016/j.econlet.2014.07.016 [DOI] [Google Scholar]
  85. Kamienski AL (2008). Competition within the market for publicly-funded education: An investigation of the impacts of charter schools on the academic achievement of elementary students attending both charter and traditional public schools within the Chicago Public Schools system (Doctoral dissertation, Loyola University Chicago). Retrieved from ProQuest Dissertations & Theses Full Text. (304558321) [Google Scholar]
  86. Kamienski A. (2011). Competition: Charter and public elementary schools in Chicago. Journal of School Choice, 5(2), 161–181. [Google Scholar]
  87. Kisa Z, Dyehouse M, Park T, Andrews-Larson B, & Herrington C. (2017). Evaluation of the Florida tax credit scholarship program, compliance, and test scores in 2015–16 (FL DOE Report No. 1516). Retrieved from http://www.fldoe.org/core/fileparse.php/5606/urlt/FTC_Report1516.pdf
  88. Levin HM (2012). Some economic guidelines for design of a charter school district. Economics of Education Review, 31, 331–343. [Google Scholar]
  89. Linick MA (2014). Measuring competition: Inconsistent definitions, inconsistent results. Education Policy Analysis Archives, 22(16). Retrieved from https://epaa.asu.edu/ojs/article/view/1418 [Google Scholar]
  90. Loeb S, Valant J, & Kasman M. (2011). Increasing choice in the market for schools: Recent reforms and their effects. National Tax Journal, 64, 141–164. [Google Scholar]
  91. Lubienski C. (2007). Marketing schools. Education and Urban Society, 40, 118–141. [Google Scholar]
  92. Martinez VJ, Godwin RK, Kemerer FR, & Perna L. (1995). The Consequences of School Choice: Who Leaves and Who Stays in the Inner City. Social Science Quarterly, 76(3), 485–501. Retrieved from JSTOR. [Google Scholar]
  93. Maranto R, Milliman S, & Stevens S. (2000). Does Private School Competition Harm Public Schools? Revisiting Smith and Meier’s The Case Against School Choice. Political Research Quarterly, 53(1), 177–192. doi: 10.1177/106591290005300109 [DOI] [Google Scholar]
  94. Maranto R, Milliman S, & Hess F. (2010). How traditional public schools respond to competition: The mitigating role of organizational culture. Journal of School Choice, 4(2), 113–136. [Google Scholar]
  95. Metcalf K, West SD, Legan N, Paul K, & Boone WJ (2003). Evaluation of the Cleveland scholarship and tutoring program: Student characteristics and academic achievement technical report 1998–2002. Bloomington: Indiana Center for Evaluation. [Google Scholar]
  96. Mills JN, & Wolf PJ (2017). Vouchers in the Bayou: The Effects of the Louisiana Scholarship Program on Student Achievement After 2 Years. Educational Evaluation and Policy Analysis, 39(3), 464–484. doi: 10.3102/0162373717693108 [DOI] [Google Scholar]
  97. Misra K, Grimes PW, & Rogers KE (2012). Does competition improve public school efficiency? A spatial analysis. Economics of Education Review, 31(6), 1177–1190. [Google Scholar]
  98. Ni Y. (2007). School efficiency, social stratification, and school choice: An examination of Michigan’s charter school program (Doctoral dissertation). Ann Arbor, MI: UMI. [Google Scholar]
  99. Ni Y. (2009). The impact of charter schools on the efficiency of traditional public schools: Evidence from Michigan. Economics of Education Review, 28(5), 571–584. [Google Scholar]
  100. Ni Y, & Arsen D. (2010). The competitive effects of charter schools on public school districts. In Lubienski CA & Weitzel PC (Eds.), The charter school experiment: Expectations, evidence, and implications (pp. 93–120). Cambridge, MA: Harvard Education Press. [Google Scholar]
  101. Nisar HD (2012). Heterogeneous competitive effects of charter schools in Milwaukee. NCSPE Occasional Paper # 202. [Google Scholar]
  102. Pham LD, Nguyen TD, & Springer MG (2017, June). Teacher merit pay and student test scores: A meta-analysis. Working Paper. University of Tennessee, Peabody College of Education. [Google Scholar]
  103. Paul K, Legan N, & Metcalf K. (2007). Differential entry into a voucher program longitudinal examination of families who apply to and enroll in the Cleveland scholarship and tutoring program. Education and Urban Society, 39, 223–243. [Google Scholar]
  104. Plucker J, Muller P, Hansen J, Ravert R, & Make M. (2006, February 9) Evaluation of the Cleveland scholarship and tutoring program: Technical report 1998–2004. Bloomington: Center for Evaluation & Educational Policy, Indiana University. [Google Scholar]
  105. Reardon S. (2011). The widening academic achievement gap between the rich and the poor: New evidence and possible explanations. In Murnane R. & Duncan GJ (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 91–115). New York, NY: Russell Sage Foundation. [Google Scholar]
  106. Rouse CE (1998). Private school vouchers and student achievement: An evaluation of the Milwaukee parental choice program. The Quarterly Journal of Economics, 113, 553–602. [Google Scholar]
  107. Rouse CE, Hannaway J, Goldhaber D, & Figlio D. (2013). Feeling the Florida Heat? How Low-Performing Schools Respond to Voucher and Accountability Pressure. American Economic Journal: Economic Policy, 5(2), 251–281. doi: 10.1257/pol.5.2.251 [DOI] [Google Scholar]
  108. Sass TR (2006). Charter schools and student achievement in Florida. Education Finance and Policy, 1(1), 91–122. [Google Scholar]
  109. Scott J, & Holme JJ (2016). The political economy of market-based educational policies: Race and reform in urban school districts, 1915 to 2016. Review of Research in Education, 40, 250–297. [Google Scholar]
  110. Shadish WR, Cook TD, & Campbell DT (2001). Experimental and quasi-experimental designs for generalized causal inference (2nd ed.). Belmont, CA: Wadsworth Publishing. [Google Scholar]
  111. Shakeel MD, & DeAngelis CA (2018). Can private schools improve school climate? Evidence from a nationally representative sample. Journal of School Choice, 12, 426–445. [Google Scholar]
  112. Smith KB, & Meier KJ (1995). Public Choice in Education: Markets and the Demand for Quality Education. Political Research Quarterly, 48(3), 461–478. 10.1177/106591299504800301 [DOI] [Google Scholar]
  113. Snyder S. (2011). Spatial dynamics of urban development: School competition and public housing policy (Ph.D.). Purdue University, Ann Arbor. Retrieved from ProQuest Dissertations & Theses Full Text. (1014166224) [Google Scholar]
  114. Swanson E. (2017). Can we have it all? A review of the impacts of school choice on racial integration. Journal of School Choice, 11, 507–526. [Google Scholar]
  115. Tipton E. (2014) Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20(3): 375–393. [DOI] [PubMed] [Google Scholar]
  116. Tipton E, & Pustejovsky JE (2015). Small-sample adjustments for tests of moderators and model fit using robust variance estimation in meta-regression. Journal of Educational and Behavioral Statistics, 40(6), 604–634. [Google Scholar]
  117. Tuttle CC, Gleason P, Knechtel V, Nichols-Barrer I, Booker K, Chojnacki G, & Goble L. (2015). Understanding the effect of KIPP as it scales. Volume I, Impacts on achievement and other outcomes. Washington, DC: Mathematica Policy Research. [Google Scholar]
  118. Waddington RJ, & Berends M. (2018). Impact of the Indiana Choice Scholarship Program: Achievement Effects for Students in Upper Elementary and Middle School. Journal of Policy Analysis and Management, 37(4), 783–808. 10.1002/pam.22086 [DOI] [Google Scholar]
  119. West MR, & Peterson PE (2006). The Efficacy of Choice Threats Within School Accountability Systems: Results from Legislatively Induced Experiments. The Economic Journal, 116(510), C46–C62. 10.1111/j.1468-0297.2006.01075.x [DOI] [Google Scholar]
  120. Witte JF, Wolf PJ, Cowen JM, Fleming DJ, & Lucas-McLean J. (2008). MPCP longitudinal educational growth study: Baseline report. SCDP Milwaukee Evaluation Report# 5 (School Choice Demonstration Project; ). Retrieved from https://eric.ed.gov/?id=ED508635 [Google Scholar]
  121. Wohlstetter P, Smith J, & Farrell CC (2013). Choices and challenges: Charter school performance in perspective. Cambridge, MA: Harvard Education Press. [Google Scholar]
  122. Wolf PJ (2007). Civics exam schools of choice boost civic values. Education Next, 7(3), 66–72. [Google Scholar]
  123. Wolf PJ, Kisida B, Gutmann B, Puma M, Eissa N, & Rizzo L. (2013). School vouchers and student outcomes: Experimental evidence from Washington, DC. Journal of Policy Analysis and Management, 32, 246–270. [Google Scholar]
  124. Wong KK, & Shen FX (2002). Politics of state-led reform in education: Market competition and electoral dynamics. Educational Policy, 16, 161–192. [Google Scholar]
  125. Zimmer R, & Buddin R. (2009). Is charter school competition in California improving the performance of traditional public schools? Public Administration Review, 69(5), 831–845. 10.1111/j.1540-6210.2009.02033.x [DOI] [Google Scholar]
  126. Zimmer R, Gill B, Attridge J, & Obenauf K. (2014). Charter school authorizers and student achievement. Education Finance and Policy, 9(1), 59–85. [Google Scholar]
