Abstract
Although data-based individualization (DBI) has positive effects on learning outcomes for students with learning difficulties, this framework can be difficult for teachers to implement due to its complexity and contextual barriers. The first aim of this synthesis was to investigate the effects of ongoing professional development (PD) support for DBI on teachers’ DBI knowledge, skills, beliefs, and fidelity and the achievement of preschool to Grade 12 students with academic difficulties. The second aim was to report on characteristics of this support and explore whether features were associated with effects. We identified 26 studies, 16 and 22 of which examined teacher and student outcomes, respectively. Meta-analyses indicated that the weighted mean effect size for DBI with ongoing support for teachers was g = 0.86 (95% confidence interval [CI] = [0.43, 1.28], p < .001, I2 = 83.74%, k = 46) and g = 0.31 for students (95% CI = [0.19, 0.42], p < .001, I2 = 61.38%, k = 103). We did not identify moderators of treatment effects. However, subset effects were descriptively larger for ongoing support that targeted data-based instructional changes or included collaborative problem-solving. Researchers may improve future DBI PD by focusing on support for teachers’ instructional changes, describing support practices in greater detail, and advancing technological supports.
Keywords: professional development, meta-analysis, response to intervention
Although many standardized instructional interventions are effective for most students, they may not produce desired outcomes for students with persistent learning difficulties (Al Otaiba & Fuchs, 2006). These students often benefit from interventions that are individualized to their learning needs (L. S. Fuchs et al., 2021; Jung et al., 2018). One framework that teachers can use to evaluate and modify interventions is data-based individualization (DBI; D. Fuchs et al., 2014; National Center on Intensive Intervention [NCII], 2013), originally termed data-based program modification (Deno & Mirkin, 1977).
During DBI, teachers monitor students’ progress in a target skill, often using curriculum-based measurement (CBM), which provides reliable and valid general indicators of academic performance and growth (Deno, 1985). Teachers administer CBM or other brief, equivalent measures frequently to monitor student progress toward a goal and graph students’ progress-monitoring data over time. They then use graphed data to engage in data-based decision-making (DBDM; Espin et al., 2017; Espin, Förster, & Mol, 2021; McMaster, Lembke, et al., 2020; van den Bosch et al., 2017). In DBDM, teachers interpret students’ graphs by comparing their actual and expected rates of improvement toward a goal. Then, teachers apply decision rules: continue the intervention (if the slope is in line with the goal line), raise the goal (if the slope is above the goal line), or change instruction (if the slope is below the goal line; Stecker et al., 2005). If a change is needed, teachers hypothesize why students did not make expected progress, synthesizing their knowledge about the student, context, and effective instructional practices. Teachers then select, plan, and make an instructional change—meaning, an adaptation intended to individualize instruction. These changes can include aligning lesson content with students’ needs or applying intensification strategies such as increased behavioral support (e.g., Danielson & Rosenquist, 2014; L. S. Fuchs et al., 2017).
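The decision rules described above amount to comparing a student's observed rate of improvement (the slope of the progress-monitoring data) with the expected rate (the slope of the goal line). The following is a minimal sketch of that logic in Python; the function name and the tolerance band are hypothetical illustrations, and actual DBDM protocols apply specific published rules (e.g., trend-line or most-recent-points rules) rather than a fixed percentage band.

```python
def dbdm_decision(actual_slope: float, goal_slope: float,
                  tolerance: float = 0.05) -> str:
    """Apply the three DBDM decision rules to a student's graphed data.

    Compares the slope of the student's progress-monitoring data with the
    slope of the goal line. The `tolerance` band is an illustrative
    assumption, not part of any published decision rule.
    """
    if actual_slope > goal_slope * (1 + tolerance):
        return "raise the goal"            # slope is above the goal line
    if actual_slope < goal_slope * (1 - tolerance):
        return "change instruction"        # slope is below the goal line
    return "continue the intervention"     # slope is in line with the goal line
```

A teacher-facing DBDM tool would derive `actual_slope` from the graphed CBM scores (e.g., by ordinary least squares over recent data points) before applying these rules.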
A considerable body of evidence indicates that the DBI process positively impacts the learning outcomes of students with disabilities and learning difficulties. In a narrative synthesis of 12 studies, Stecker et al. (2005) found that students whose teachers used DBDM to adapt instruction made more academic growth than students whose teachers did not. Teachers had more positive attitudes toward CBM and were more effective at making instructional changes when aspects of the DBDM process were automated or supported with computer programs. These computerized DBDM programs targeted graphing, graph interpretation, decision rule application, and planning instructional changes (e.g., L. S. Fuchs et al., 1992). However, teachers had trouble planning instructional changes without instructional recommendations or researcher consultation. The authors concluded that advancements in computerized DBDM would benefit teachers, but teachers may need ongoing, person-to-person support to make data-based instructional changes.
Jung et al. (2018) extended Stecker et al.’s (2005) synthesis by conducting a meta-analysis of the effects of DBI, implemented by teachers or research team members, for students with disabilities. Across 14 studies, DBI had positive effects on students’ math, reading, and writing—namely, spelling—outcomes. Effects were similar whether implementers received student mastery reports or recommendations for instructional changes (g = 0.38, 95% confidence interval [CI] = [0.15, 0.61]) or less extensive support (g = 0.37, 95% CI = [0.17, 0.58]). Effects were also similar across academic areas. Through subset effect size calculation, the authors were able to descriptively compare effect sizes for specific characteristics of support. Effect sizes for studies where graph interpretation was conducted by DBI implementers or computerized DBDM were similar. Small-group collaborative support with individual consultation had a larger effect size than individual consultation alone. In terms of frequency, support that was delivered weekly had the largest effect. In this meta-analysis, all but one study provided ongoing support to DBI implementers; as a result, the findings indicated positive effects of DBI on student outcomes when ongoing support was provided. The authors called for more research on the effects of various types of ongoing support.
Most recently, Gesel et al. (2021) conducted a meta-analysis of the effects of DBDM professional development (PD) on teachers’ knowledge, skills, and self-efficacy related to CBM and DBDM. One aim of this meta-analysis was to examine how to structure PD protocols to better support teachers’ use of DBDM, so that positive effects on student achievement could be realized. The authors operationally defined DBDM as any systematic process of collecting student data to inform instruction, including DBI. Across 28 studies, PD had a positive effect (g = 0.57, 95% CI = [0.29, 0.85]). Effects were not moderated by study quality, but skills-based outcomes were associated with smaller effects. The duration of PD treatments ranged from 25 min to 1 year. Given that most studies included extensive PD, the authors recommended that future researchers identify ways to support teachers’ effective use of DBI that would be feasible in real-world school settings.
Ongoing Support for DBI
Findings from Stecker et al. (2005) and Jung et al. (2018) suggest that ongoing support may be necessary for teachers to successfully implement DBI. In Desimone’s (2009) conceptual framework, ongoing support is PD delivered over a sustained duration, or contact time with trainers that extends throughout program implementation. This ongoing support may include coaching, follow-up trainings, or meetings with fellow educators. Teacher training theory posits that these meetings that extend beyond initial training allow teachers to integrate knowledge and skills into practice (Joyce & Showers, 1981). Data-based individualization is an iterative, individualized process, meaning that teachers must interpret and use unique student data throughout implementation, applying their knowledge and skills related to DBI in new ways. In DBI PD theory, ongoing support provides active learning opportunities for teachers to increase their DBI knowledge, skills, beliefs, and fidelity, and thereby improve student outcomes (Lembke et al., 2018). Across educational interventions, ongoing teacher support has been found to increase related teacher and student outcomes (e.g., Kretlow & Bartholomew, 2010).
Although teachers may need ongoing support to use DBI, the findings of all three DBI meta-analyses indicate a continued need to identify ways to optimize supports for teachers’ use of DBI. Over the past decade, there have been national efforts to bring DBI to scale and increased evidence of teacher-, school-, and district-level capacity to implement DBI (Shanahan et al., 2024; Kearns et al., 2022; Lemons et al., 2019; NCII, 2013; Powell et al., 2021). However, teachers’ difficulties implementing and sustaining some steps of the DBI process, namely, planning and enacting data-based instructional changes, have persisted over time (Shanahan et al., 2024; Deno, 2014; L. S. Fuchs et al., 2021; Stecker et al., 2005; Swain & Hagaman, 2020; van den Bosch et al., 2017; Zumeta Edmonds, 2015). These findings suggest that effective and efficient ongoing support protocols for school settings would have considerable value.
Examining the effects of ongoing DBI support for teachers may inform the design of effective DBI PD. Previous DBI syntheses have not identified a specific summary effect of ongoing support for teachers implementing DBI on the academic outcomes of students with and without disabilities. In real-world contexts, DBI is implemented by teachers (Deno & Mirkin, 1977) for any student with persistent learning difficulties (NCII, 2013). In addition, these syntheses have not identified a summary effect of ongoing support for DBI on teacher outcomes, including teachers’ more general beliefs about the process, which can mediate relations between PD and student outcomes (Filderman et al., 2022). If ongoing support does produce positive effects for teachers and their students, this would suggest that investments in long-term support are warranted.
Several characteristics of ongoing support for DBI may increase its impact. Researchers can use these characteristics to design and evaluate the effects of less resource-intensive, but still effective, ongoing support for school contexts. First, a content focus on teachers’ design and implementation of data-based instructional changes may lead to larger effects, given teachers’ difficulties with this individualization process (Shanahan et al., 2024; van den Bosch et al., 2017). Second, including computerized DBDM in addition to person-to-person support (e.g., coaching, peer group) may have added benefits (Stecker et al., 2005). Computerized DBDM may free up teachers’ strained time (Swain & Hagaman, 2020), allowing them to focus on planning and instruction (L. S. Fuchs et al., 2021). Computerized DBDM may also reduce the significant costs associated with person-to-person support (Knight, 2012). Third, support that involves collaborative problem-solving (Idol et al., 1995) may be particularly beneficial in the context of DBI (Jung et al., 2018). Data-based individualization is iterative and contextualized; as a result, teachers have valuable knowledge about their students and resources that can inform effective solutions.
Previous DBI syntheses have explored these characteristics narratively (L. S. Fuchs et al., 2021; Stecker et al., 2005) or through subgroup analysis (Jung et al., 2018). Meta-regression allows for the effects of multiple ongoing support characteristics to be investigated simultaneously and can provide evidence that the characteristic is associated with effects (Higgins et al., 2019). In addition, these syntheses did not aim to describe support practices, such as collaborative problem-solving or feedback, and how they were applied to discrete components of DBI. Operationalizing support protocols in this manner can help researchers identify ways to improve, replicate, and scale PD (e.g., Brock et al., 2017; Espin, van den Bosch, et al., 2021; Klingner et al., 2013).
Current Study
The purpose of this synthesis was to investigate whether ongoing support for DBI had positive effects on teacher- and student-level outcomes and to explore features of ongoing support. We built on the findings of three previous DBI-related meta-analyses on teacher (Gesel et al., 2021) and student (Jung et al., 2018; Stecker et al., 2005) effects by narrowing the focus to ongoing support for teachers. We aimed to identify the characteristics of ongoing support provided, and whether several characteristics were associated with stronger effects. First, we calculated average weighted effect sizes of ongoing DBI support for teacher and student outcomes separately. Second, we coded and synthesized how components of DBI have been supported in this literature. Third, we used moderator and subset analyses to examine whether features of ongoing support influenced the magnitude of effects or led to descriptively larger effects. Thus, this synthesis is guided by the following research questions:
Research Question 1 (RQ1): What are the effects of ongoing support for DBI on (a) teachers’ DBI knowledge, skills, beliefs, and fidelity and (b) students’ academic outcomes?
Research Question 2 (RQ2): What are the characteristics of ongoing support provided within DBI research?
Research Question 3 (RQ3): Do teacher- and student-level effects differ based on whether ongoing support included a focus on instructional changes, computerized DBDM, or collaborative problem-solving?
Method
To be included in this synthesis, studies needed to meet these criteria: (a) participants were teachers and their students with disabilities or academic difficulties (or effects were disaggregated for these students); (b) the study was a randomized-controlled trial (RCT) or quasi-experiment (QE) with an ongoing support for DBI treatment; (c) the comparison condition was a business-as-usual (BAU) control, DBI without PD, DBI with initial training outside the context of implementation, or less extensive ongoing support for DBI; (d) data to calculate effect sizes for teacher outcomes related to DBI or student academic outcomes were reported; and (e) the study was a peer-reviewed journal article, doctoral dissertation, or research report.
We operationalized several of our inclusion criteria. Students with academic difficulties needed to be identified based on teacher report or pretest performance. Data-based individualization was defined as instruction in which (a) individual student progress was monitored at least every 3 weeks using CBMs or another general academic outcome measure with evidence of alternate-form reliability (r ≥ .5), and (b) students’ data were graphed over time and used to make at least one instructional decision. Ongoing support was more than one meeting planned by researchers and intended to facilitate teachers’ implementation of DBI with students. Comparison conditions with less extensive ongoing support for DBI were delivered at a lower dosage, targeted fewer DBI components, or used fewer support practices than another treatment. Research reports were research findings published by nonprofit organizations (e.g., NCII).
Our inclusion criteria differed from previous DBI syntheses (Gesel et al., 2021; Jung et al., 2018) in several ways. First, we only included studies with ongoing support for DBI during instruction. Second, we allowed reliable progress-monitoring measures besides CBM. Third, in contrast with Jung et al. (2018), we included students without disabilities. Fourth, in contrast with Gesel et al. (2021), we included teacher beliefs related to DBI as an outcome. These criteria narrowed our focus on ongoing PD but allowed for additional relevant studies to be included.
Search and Study Identification
See Figure 1 for a flowchart of search and study identification procedures. We first completed an electronic search of ERIC, PsycINFO, Education Source, and ProQuest Dissertations and Theses to identify all relevant studies published between January 1977 (the year DBI was conceptualized; Deno & Mirkin, 1977) and November 2022. We adapted search terms (Supplemental Material 1) from previous reviews of teacher PD or DBI research (Brock et al., 2017; Gesel et al., 2021; Jung et al., 2018). Our primary search terms were related to DBI (e.g., data-based decision-making). Our secondary search terms were related to teacher PD in or outside the context of DBI (e.g., training OR expert systems). Our third set of search terms was related to eligible designs (e.g., quasi-experiment). This search yielded 2,233 abstracts. Next, we hand searched articles published after January 2016 in journals that were frequently cited in previous DBI syntheses (Gesel et al., 2021; Jung et al., 2018): Exceptional Children, Journal of Learning Disabilities, Learning Disability Quarterly, Learning Disabilities Research and Practice, Remedial and Special Education, Teaching and Teacher Education, Teacher Education and Special Education, and The Journal of Special Education. This search yielded 20 abstracts. We then reviewed references of relevant research syntheses (Filderman et al., 2018; Gesel et al., 2021; Jung et al., 2018; McMaster, Baker, et al., 2020) and identified 40 abstracts. Last, we reviewed proceedings and contacted presenters from large conferences related to students with high-incidence disabilities—the Council for Exceptional Children Conference (2013–2022), Council for Learning Disabilities Conference (2021–2022), and Pacific Coast Research Conference (2015–2022)—but identified no abstracts. Our searches yielded 1,652 abstracts after de-duplication.
Figure 1.
Diagram of Search Procedures and Results.
The first author screened all abstracts using Rayyan (Ouzzani et al., 2016). Another author screened 20.2% (n = 333); interobserver agreement (IOA) was 93.6%. The two screeners discussed and came to consensus decisions to resolve disagreements, which did not lead to the inclusion of new studies. The first author then reviewed the full texts of 116 studies for eligibility. Another author reviewed full texts of 20.7% of these studies (n = 24), and agreement between the two authors was 100%. Studies were often excluded due to the absence of students with disabilities or academic difficulties (n = 34), or progress monitoring (n = 21). Twenty-five studies met eligibility criteria. We conducted a citation and reference search that yielded no additional studies. After this search, we identified two new records that matched inclusion criteria: a journal article (McMaster et al., forthcoming) and a dissertation that reported two outcomes from the McMaster et al. (forthcoming) study (Shanahan, 2023). We included outcomes from both records but counted them as one study, McMaster et al. (forthcoming). Two included studies used the same set of teacher participants but were conducted separately and counted as two studies (L. S. Fuchs et al., 1990; L. S. Fuchs, Fuchs, Hamlett, & Stecker, 1991). Thus, 26 studies were included in this synthesis.
Coding of Studies
We developed codes for study design, participants, instruction, progress monitoring and DBDM, training and ongoing support, eligible outcomes, and quality. We conducted coding with a Qualtrics survey (Supplemental Material 2). For ongoing support, we coded DBI content focus: instruction, progress monitoring, graphing, graph interpretation or decision rule application, and instructional change planning and implementation. We coded the ongoing support format as person-to-person (“person”) or person plus computerized DBDM (“computerized DBDM”). Person support included coaching, consultation, follow-up training, or peer educator support facilitated by researchers. Computerized DBDM, termed “expert systems” or “skills analysis,” automated aspects of DBDM.
We coded for the type of ongoing support practices employed, which were delivered by support providers or computerized DBDM. We adapted PD practice definitions (Brock et al., 2017) to fit the context of DBI. These practices were description of practice (instruction related to DBI purpose or process), modeling (support provider enacted examples of DBI for teacher), performance feedback (reinforcement of correct DBI implementation or suggestions for improvement), planning (teacher or support provider created DBI implementation plans), positive reinforcement (teachers were rewarded for DBI performance), rehearsal (teachers practiced implementing DBI in the context of ongoing support), self-monitoring (teachers were directed to assess their own DBI use), and question and answer (teacher asked questions about DBI, support providers answered). We created “collaborative” and “expert” problem-solving support practice codes using principles from collaborative (Idol et al., 1995) and behavioral (Bergan, 1977) consultation models. Collaborative problem-solving occurred when teachers and support providers worked together to identify problems and solutions related to student progress or DBI implementation. Expert problem-solving occurred when the support provider directed solutions.
We coded the type of outcomes measured, which were student academic outcomes and teacher outcomes: DBI fidelity (instruction or DBDM fidelity or quality), skills (e.g., number of decisions made), knowledge (e.g., content or pedagogical knowledge about DBI), or beliefs (self-efficacy, attitudes toward DBI). We also coded whether outcome measures were standardized or researcher-created.
We rated study quality using the Council for Exceptional Children’s quality indicators for special education group design studies (Cook et al., 2015). We assigned a score of 1 or 0 for each indicator being present or not present, respectively, for a total of 21 to 22 possible points. One indicator was not applicable for studies where treatment fidelity was a dependent variable. We reported quality as a percentage of indicators present in each study.
Before coding, IOA was established between two authors. We calculated IOA as agreements / (agreements + disagreements) for survey response options displayed to the coder. We separately coded seven studies (24.1%) with an average of 92.5% IOA (range = 87.9%–95.5%) for descriptive coding and 92.7% IOA (range = 81.8%–95.5%) for study quality coding; the first author coded the remaining studies. The first author then extracted data to calculate effect sizes for eligible outcomes using an Excel spreadsheet, entering each value twice: first using Excel’s “data from picture” feature, then by hand. The first author resolved all discrepancies between the two entries.
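The IOA calculation above is point-by-point agreement expressed as a percentage. A minimal sketch (the function name is ours, not from the study):

```python
def interobserver_agreement(agreements: int, disagreements: int) -> float:
    """Point-by-point interobserver agreement as a percentage:
    agreements / (agreements + disagreements) * 100."""
    return agreements / (agreements + disagreements) * 100
```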
Data Analysis
We calculated separate summary effect sizes for teacher and student outcomes. To account for small sample sizes, we used Hedges’s g as a measure of effect size with the formula g = J × (M_T − M_C) / SD_pooled, where J = 1 − 3 / (4(n_T + n_C − 2) − 1) is the small-sample correction factor. We calculated g using adjusted posttest means when provided by authors (What Works Clearinghouse [WWC], 2022). Some studies included three experimental groups: (a) DBI plus more extensive ongoing support, (b) DBI plus less extensive ongoing support, and (c) BAU control. We calculated effect sizes of Group 1 compared with Groups 2 and 3, as well as Group 2 compared with Group 3. We conducted all analyses with the metafor package in R (Viechtbauer, 2010). We calculated summary effects using random effects models weighting study effects by their inverse variances; studies with more participants were given more weight in the models. Given that studies included multiple outcome measures, we used robust variance estimation (RVE; Hedges et al., 2010) to account for effect size dependency. We set ρ = 0.80 as the level of within-study effect correlation. Summary effects and standard errors were robust to varying ρ values. We found evidence of an outlier (i.e., g > |2.5| SD from the average effect size) among teacher effects. A sensitivity analysis of the results with and without the outlier indicated that the summary effect and standard error were robust to its inclusion.
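As a rough illustration of the effect size computation, the following is a minimal Python sketch of Hedges’s g with the standard small-sample correction. This is not the authors’ code (analyses were conducted with the metafor package in R); it simply restates the formula.

```python
import math

def hedges_g(m_t: float, m_c: float, sd_t: float, sd_c: float,
             n_t: int, n_c: int) -> float:
    """Hedges's g: standardized mean difference with small-sample correction.

    m_t, m_c   : treatment and comparison group means
    sd_t, sd_c : group standard deviations
    n_t, n_c   : group sample sizes
    """
    df = n_t + n_c - 2
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (m_t - m_c) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * df - 1)             # small-sample correction factor J
    return j * d
```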
A funnel plot of teacher effect sizes versus standard errors was asymmetrical, but Egger’s regression test did not indicate significant asymmetry (z = 1.38, p = .17). One included study (L. S. Fuchs et al., 1984) did not include data to calculate effect sizes for two nonsignificant teacher outcomes. These missing data may have contributed to asymmetry, and the teacher summary effect must be interpreted with caution. The funnel plot of student effects was symmetrical.
We used ongoing support codes to determine which aspects of DBI were targeted, the format and tools provided as part of ongoing support, and the practices employed to support teachers’ use of DBI. The first author used these codes to write summaries of how authors reported supporting each DBI component, and narratively synthesized summaries.
Prior to conducting meta-regressions, we assessed heterogeneity of study effects using the I2 statistic, I2 = ((Q − df) / Q) × 100%, where Q = the weighted sum of the squared deviations of each study effect from the summary effect and df = k − 1 (Higgins et al., 2003). I2 equaled 83.74% for teacher and 61.38% for student outcomes, suggesting substantial and considerable heterogeneity, respectively (Higgins et al., 2019).
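A minimal sketch of the I2 computation described above (the function name is illustrative; in practice, metafor reports I2 directly):

```python
def i_squared(q: float, k: int) -> float:
    """I-squared heterogeneity statistic (Higgins et al., 2003).

    q : Cochran's Q, the weighted sum of squared deviations of each
        study effect from the summary effect
    k : number of effect sizes (so df = k - 1)

    Returns the percentage of total variability attributable to
    between-study heterogeneity, truncated at zero.
    """
    df = k - 1
    return max(0.0, (q - df) / q) * 100
```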
We conducted meta-regression analyses, one for teacher and one for student outcomes, to examine whether effects varied by moderators. Due to the small number of studies, we made two adjustments. First, as planned, both meta-regressions included comparison condition type to control for its effect, but we collapsed comparison conditions into two levels: (a) BAU or DBI without PD and (b) DBI plus training or less extensive ongoing support (“less extensive support”). Second, we did not include all intended moderators: (a) support for instructional changes, (b) computerized DBDM, and (c) collaborative problem-solving. Moderator 1 was included in the teacher model, and Moderators 1 and 2 were included in the student model.
In addition to meta-regressions, we calculated summary effect sizes of variable subsets. The variables were our proposed moderators and outcome type (e.g., teacher DBI fidelity, student math achievement). In exploratory moderator analyses, indicators of study quality were not associated with effects.
Results
Study Characteristics
For a spreadsheet of study coding, see Supplemental Material 3. For a table of study characteristics, see Table 1. Across 26 studies, there were 556 teacher and 1,596 student participants. All studies included in-service teachers, who were often special educators (n = 21), and most supported teachers’ use of DBI in reading (n = 12). Most studies included students in elementary grades (n = 21) and students with learning disabilities (n = 21). Six studies included general education students with academic difficulties. Most studies were peer-reviewed journal articles (n = 23) and RCTs (n = 22). Only five were published after 2005. Study procedures lasted an average of 17.0 weeks (SD = 3.16). Two studies used progress-monitoring measures besides CBM (Förster et al., 2018; Wackerle-Hollman, 2009). Average study quality was 87.8% (SD = 5.1%). Only one study reported validity evidence for a teacher measure (L. S. Fuchs et al., 1984). Teachers’ DBI fidelity was typically evaluated using the Modified Accuracy of Implementation Rating Scale (L. S. Fuchs et al., 1987). The DBI skills measures varied and included the number of instructional adjustments made (L. S. Fuchs, Fuchs, Hamlett, & Allinder, 1991) and the ambitiousness of CBM goals (Allinder & Beckbest, 1995). About 75.6% of teacher measures were researcher-created. Researchers measured students’ math (k = 56), reading (k = 28), and writing (k = 25) skills; 99.5% of these measures were standardized.
Table 1.
Characteristics of the 26 Studies Reviewed for Teacher Support of Data-Based Individualization.
| Study | Design | Comp. condition a | Quality (%) | Teacher outcomes | Student outcomes | Student grades | Student disability |
|---|---|---|---|---|---|---|---|
| Allinder (1996) | RCT | 0 | 86 | | M | 3–6 | EBD, ID, SLD |
| Allinder & Beckbest (1995) | RCT | 3 | 95 | F, S | M | 2–8 | ID, SLD |
| Allinder et al. (2000) | QE | 0 | 86 | | M | NR | ID, SLD |
| Ergul (2007) | QE | 1 | 81 | | R | Pre-K | Inc, NR |
| Förster et al. (2018) | QE | 0 | 73 | | R | 3 | Diff |
| L. S. Fuchs (1988) | RCT | 3 | 86 | F | W | NR | EBD, SLD |
| L. S. Fuchs & Fuchs (1990) | RCT | 0 | 90 | F, S | M | 2–9 | EBD, SLD |
| L. S. Fuchs & Fuchs (1991) | RCT | 0 | 90 | F, S | W | 2–8 | EBD, SLD |
| L. S. Fuchs & Fuchs (1993) | RCT | 3 | 95 | F | R | 4–9 | EBD, SLD |
| L. S. Fuchs et al. (1984) | RCT | 0 | 91 | F, S | R | 3–7 | EBD, ID, O, TBI |
| L. S. Fuchs et al. (1987) | RCT | 3 | 81 | F | R | 4–9 | EBD, SLD |
| L. S. Fuchs et al. (1988) | RCT | 2 | 86 | F | R, M, W | 3–6 | EBD, SLD |
| L. S. Fuchs et al. (1989a) | RCT | 0 | 90 | F, S | W | 2–6 | EBD, SLD |
| L. S. Fuchs et al. (1989b) | RCT | 0 | 86 | F, S | M | 2–9 | EBD, SLD |
| L. S. Fuchs et al. (1989c) | RCT | 3 | 86 | F, S | R | 3–9 | EBD, SLD |
| L. S. Fuchs et al. (1990) | RCT | 0 | 86 | S | | NR | EBD, SLD |
| L. S. Fuchs, Fuchs, Hamlett, & Allinder (1991) | RCT | 0 | 86 | F, S | M | 2–8 | EBD, SLD |
| L. S. Fuchs, Fuchs, Hamlett, & Stecker (1990) | RCT | 0 | 86 | F, S | W | 3–9 | EBD, SLD |
| L. S. Fuchs et al. (1992) | RCT | 0 | 90 | F, S | R | 2–8 | EBD, SLD |
| L. S. Fuchs et al. (1995) | RCT | 2 | 95 | S | M | 2–4 | SLD |
| Mathes et al. (1998) | RCT | 0 | 91 | S | R | 2–6 | EBD, SLD |
| McCullum (1999) | QE | 3 | 90 | F | R | 2–3 | Diff |
| McMaster, Lembke, et al., (2020) | RCT | 0 | 91 | B, F, K | W | 1–5 | ASD, EBD, HI, ID, OHI, SLD, SLI |
| McMaster et al. (forthcoming) | RCT | 0 | 91 | B, F, K | W | 1–6 | ASD, EBD, ID, OHI, SLD, SLI, TBI |
| Wackerle-Hollman (2009) | RCT | 0, 2 | 86 | B | R | Pre-K | Inc, NR |
| Wesson (1990) | RCT | 0 | 81 | F, S | R | 2–7 | EBD, ID, SLD |
Note. ASD = autism spectrum disorder; B = beliefs; Diff = students with academic difficulties, without disabilities; EBD = emotional/behavioral disorder; F = fidelity; HI = hearing impairment; ID = intellectual disability; Inc, NR = students with disabilities included, but disability not reported; K = knowledge; M = math; NR = not reported; O = orthopedic impairment; OHI = other health impairment; Pre-K = pre-kindergarten; R = reading; S = skills; SLI = speech or language impairment; TBI = traumatic brain injury; W = writing.
a Comparison conditions: 0 = BAU; 1 = DBI without PD; 2 = DBI with training; 3 = DBI with less extensive ongoing support than another treatment condition.
Effects of Ongoing Support for DBI
Of the total 26 studies, 16 studies with 19 treatments examined the effects of ongoing DBI support on teacher outcomes (k = 46), and 22 studies with 30 treatments examined effects on student achievement (k = 103). Meta-analyses indicated that the weighted mean effect size for DBI with ongoing support was g = 0.86 for teachers (95% CI = [0.43, 1.28], I2 = 83.74%, p < .001) and g = 0.31 for students (95% CI = [0.19, 0.42], I2 = 61.38%, p < .001).
Characteristics of Ongoing Support for DBI
Studies included 35 eligible treatment conditions (see Table 2). All but two treatments had initial training (L. S. Fuchs et al., 1984, 1989c). On average, teachers received 4.70 hr of ongoing support (SD = 3.02) across 28 treatments with sufficient information. Twenty-five treatments included computerized DBDM support alongside researcher support (74.3%). In two treatments, researchers facilitated support with educator peer groups (McCullum, 1999; Wesson, 1990).
Table 2.
Ongoing Support of Data-Based Individualization Characteristics.
| Study and treatment | Type | Support hours | Frequency in weeks | DBI components targeted | Support practices |
|---|---|---|---|---|---|
| Allinder (1996) | Comp. DBDM | 3.4 | 2.5 | PM, G, GI | Description; performance feedback; planning; problem-solving, expert |
| Allinder & Beckbest (1995) | Comp. DBDM | 1.7 | 3.4 | PM, G, GI, IC | Performance feedback; planning; problem-solving, expert; Q&A |
| Allinder et al. (2000) | |||||
| CBM self-monitoring | Comp. DBDM | 3.3 | 2.0 | PM, G, GI, IC | Planning; problem-solving, expert; self-monitoring |
| CBM alone | Comp. DBDM | 3.3 | 2.0 | PM, G, GI, IC | Planning; problem-solving, expert |
| Ergul (2007) | Person | 12.0 | 4.0 | I, PM, GI, IC | Modeling; performance feedback; planning; self-monitoring |
| Förster et al. (2018) | Person | NR | 8.5 | I, PM, GI | Problem-solving, collaborative |
| L. S. Fuchs (1988) | Comp. DBDM | 5.7 | 1.0 | G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs & Fuchs (1990) | |||||
| CBM performance indicator | Comp. DBDM | 3.8 | 2.0 | PM, G, GI, IC | Problem-solving, collaborative; problem-solving, expert |
| Performance indicator and skills analysis | Comp. DBDM | 3.8 | 2.0 | PM, G, GI | Planning; problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs & Fuchs (1991) | |||||
| CBM expert system | Comp. DBDM | 6.0 | 1.5 | PM, G, GI, IC | Planning; problem-solving, collaborative; problem-solving, expert |
| CBM no expert system | Comp. DBDM | 6.0 | 1.5 | PM, G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs & Fuchs (1993) | Comp. DBDM | 5.7 | 1.0 | G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1984) | Person | NR | 1.0 | GI | Description |
| L. S. Fuchs et al. (1987) | Comp. DBDM | 5.7 | 0.9 | G, GI | Problem-solving, expert |
| L. S. Fuchs et al. (1988) | Comp. DBDM | 2.3 | 2.0 | PM, G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1989a) | |||||
| CBM enhanced | Comp. DBDM | 2.3 | 2.0 | G, GI | Performance feedback; problem-solving, collaborative; problem-solving, expert |
| CBM unenhanced | Comp. DBDM | 2.3 | 2.0 | G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1989b) | Comp. DBDM | 2.3 | 2.0 | PM, G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1989c) | Comp. DBDM | NR | 2.3 | PM, G, GI, IC | Planning; problem-solving, expert |
| L. S. Fuchs et al. (1990) | |||||
| CBM skills analysis | Comp. DBDM | 1.9 | NR | PM, G, GI, IC | Planning; problem-solving, expert |
| CBM visual inspection | Comp. DBDM | 1.9 | NR | PM, G, GI, IC | Planning; problem-solving, expert |
| L. S. Fuchs, Fuchs, Hamlett, & Allinder (1991) |||||
| CBM expert system | Comp. DBDM | 4.7 | 1.5 | PM, G, GI, IC | Planning; problem-solving, collaborative; problem-solving, expert |
| CBM no expert system | Comp. DBDM | 5.2 | 1.5 | PM, G, GI, IC | Planning; problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs, Fuchs, Hamlett, & Stecker (1991) a | |||||
| CBM skills analysis | Comp. DBDM | NR | 2.5 | PM, G, GI, IC | Planning; problem-solving, collaborative; problem-solving, expert |
| CBM no skills analysis | Comp. DBDM | NR | 2.5 | PM, G, GI | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1992) | |||||
| CBM expert system | Comp. DBDM | 4.8 | 1.5 | PM, G, GI, IC | Planning; problem-solving, collaborative; problem-solving, expert |
| CBM no expert system | Comp. DBDM | 5.1 | 1.5 | PM, G, GI, IC | Problem-solving, collaborative; problem-solving, expert |
| L. S. Fuchs et al. (1995) | Comp. DBDM | 3.5 | 1.5 | I, PM, G, GI, IC | Performance feedback; planning; problem-solving, expert |
| Mathes et al. (1998) | Person | NR | 1.0 | PM, G, GI | Description; planning |
| McCullum (1999) | Person | 4.5 | 4.0 | PM, G, GI | Planning; problem-solving, collaborative |
| McMaster, Lembke, et al. (2020) | Person | 7.7 | 2.0 | I, PM, GI, IC | Description; performance feedback; planning; positive reinforcement; problem-solving, collaborative; Q&A |
| McMaster et al. (2024) | Person | 15.5 | 2.0 | I, PM, GI, IC | Description; performance feedback; planning; positive reinforcement; problem-solving, collaborative; Q&A |
| Wackerle-Hollman (2009) | Person | 2.8 | 4.0 | PM, GI | Problem-solving, collaborative |
| Wesson (1990) | |||||
| CBM group follow-up | Person | 4.8 | 4.0 | PM, GI | Planning; problem-solving, collaborative |
| CBM individual follow-up | Person | NR | 4.0 | PM, GI | Performance feedback; planning; problem-solving, collaborative |
Note. Comp. DBDM = computerized data-based decision making plus support provided by a person; NR = not reported; I = instruction; PM = progress monitoring; G = graphing; GI = graph interpretation or decision rule application; IC = instructional changes.
a Study has the same teacher participants as L. S. Fuchs et al. (1990).
Researchers often employed support practices for multiple steps of the DBI process or for the DBI process in general. These supports included expert problem-solving via computerized DBDM (n = 25), collaborative problem-solving around DBI implementation issues (n = 23), descriptions of DBI (Allinder, 1996; L. S. Fuchs et al., 1984; Mathes et al., 1998; McMaster et al., 2024; McMaster, Lembke, et al., 2020), opportunities to ask DBI-related questions (Allinder & Beckbest, 1995; McMaster et al., 2024; McMaster, Lembke, et al., 2020), and positive reinforcement related to DBI use (McMaster et al., 2024; McMaster, Lembke, et al., 2020). In the following sections, we describe how researchers reported providing ongoing support for each component of DBI.
Five treatments targeted teachers’ implementation of effective instruction (Ergul, 2007; Förster et al., 2018; L. S. Fuchs et al., 1995; McMaster et al., 2024; McMaster, Lembke, et al., 2020). Each treatment included an instructional program. These programs consisted of researcher-created literacy activities that authors aligned (Förster et al., 2018; McMaster et al., 2024; McMaster, Lembke, et al., 2020) or did not align (Ergul, 2007) with research-based practices, or Peer-Assisted Learning Strategies (Förster et al., 2018; L. S. Fuchs et al., 1995), an evidence-based practice (Mathes, 2012). In three treatments, teachers provided instruction in small groups or individually (Ergul, 2007; L. S. Fuchs et al., 1995; McMaster, Lembke, et al., 2020). During ongoing support, members of the research team modeled instruction (Ergul, 2007) and provided performance feedback on instructional fidelity or quality (Ergul, 2007; L. S. Fuchs et al., 1995; McMaster et al., 2024; McMaster, Lembke, et al., 2020).
In most treatments, researchers supported progress monitoring (n = 29, 83%). Data collection and scoring were often done for teachers, either by a computer program (n = 21) or with assistance from a member of the research team (Ergul, 2007; McCullum, 1999; Wackerle-Hollman, 2009). Treatments also included support for graphing (n = 27, 77%), either via computerized DBDM (n = 26) or via a researcher graphing data for teachers (McCullum, 1999). Other treatments provided teachers with graphing spreadsheets but no ongoing graphing support (McMaster et al., 2024; McMaster, Lembke, et al., 2020; Wackerle-Hollman, 2009).
All treatments included support for graph interpretation. Researchers often used computerized DBDM to automate the process (n = 25) but still discussed graph interpretation during meetings with teachers (n = 17). The specific support practices used in these discussions were unclear. In some treatments, researchers provided performance feedback on graph interpretation (Allinder, 1996; Allinder & Beckbest, 1995; L. S. Fuchs et al., 1995). Graph interpretation resources included self-directed tools, such as self-monitoring checklists, recording forms, and decision rule application flowcharts (Allinder, 1996; Ergul, 2007; McCullum, 1999; McMaster et al., 2024; McMaster, Lembke, et al., 2020; Wesson, 1990).
Fewer than half of the treatments included support for instructional changes (n = 16, 46%); 14 studies included at least one treatment that supported this DBI component (54%). Most of this support was aimed at assisting teachers in planning changes. Some computerized DBDM programs organized student mastery data for teachers (e.g., Allinder & Beckbest, 1995; n = 11). Treatments without computerized DBDM gave teachers self-monitoring or self-directed tools to generate hypotheses about students’ lack of progress and to choose instructional changes (Allinder et al., 2000; Ergul, 2007; L. S. Fuchs et al., 1995; McMaster et al., 2024; McMaster, Lembke, et al., 2020). In L. S. Fuchs and Fuchs (1991), L. S. Fuchs, Fuchs, Hamlett, and Allinder (1991), and L. S. Fuchs et al. (1992), a computerized DBDM program asked teachers questions about their students and the instructional context. Based on these responses, the program recommended an instructional change and provided guidance on implementing it. Researchers often specified only planning as a support strategy, although Ergul (2007) gave performance feedback on teachers’ instructional change selection process.
In some cases, researchers extended instructional change support further by providing individualized instructional materials. L. S. Fuchs et al. (1989c) and L. S. Fuchs, Fuchs, Hamlett, and Stecker (1991) provided premade instructional packets that aligned with teachers’ instructional decisions. The instructional program in McMaster et al. (2024) and McMaster, Lembke, et al. (2020) included a menu of lesson activities and materials that teachers selected and used to change lesson content. Researchers did not specify support practices for instructional change implementation.
Moderators of Ongoing DBI Support Effects
For moderator analyses, see Table 3. In the teacher meta-regression model, instructional change support was associated with a negligible, nonsignificant decrease in effects (β = –0.004, 95% CI = [–0.44, 0.43], p = .97). The I2 of 57.65% indicated that heterogeneity was reduced but remained substantial. In the student model, support for teachers’ instructional changes was associated with a nonsignificant increase (β = 0.03, 95% CI = [–0.28, 0.34], p = .85), and computerized DBDM was associated with a nonsignificant decrease (β = –0.01, 95% CI = [–0.24, 0.22], p = .97). The I2 of 58.36% indicated that heterogeneity remained substantial after the inclusion of moderators.
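A meta-regression of this kind regresses effect sizes on moderator codes using inverse-variance weights. The sketch below is a minimal, hypothetical single-moderator model; it omits the robust variance estimation (RVE) and small-sample corrections the synthesis relied on, so no standard errors or p values are computed:

```python
def weighted_meta_regression(effects, variances, moderator):
    """Weighted least squares regression of effect sizes on one binary
    moderator (e.g., 1 = support targeted instructional changes, 0 = not).
    Weights are inverse sampling variances.
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, moderator)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, moderator))
    sxy = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, moderator, effects))
    slope = sxy / sxx                     # change in g associated with moderator
    intercept = ybar - slope * xbar       # pooled g when moderator = 0
    return intercept, slope
```

In this framing, the β values in Table 3 are the slopes: the estimated difference in pooled g between treatments with and without the coded feature, conditional on the other moderators in the model.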
Table 3.
Results From Meta-Regression Moderator Analyses.
| Moderator | Teacher outcomes | Student outcomes |
|---|---|---|
| DBI LE support comparison condition | –1.57 (0.290)** | –0.27 (0.213) |
| Instructional change support | –0.004 (0.225) | 0.03 (0.157) |
| Person + Comp. DBDM support | –0.01 (0.118) | |
| Intercept | 1.84 (0.321)** | 0.35 (0.150) |
| Effect sizes (k) | 46 | 103 |
| Studies (n) | 16 | 22 |
| I 2 | 57.65% | 58.36% |
Note. Effect sizes and standard errors reflect robust variance estimates. All estimates have ≥ 4 degrees of freedom. DBI = data-based individualization; LE = less extensive (training only or less extensive ongoing support). DBDM = data-based decision making.
*p < .05. **p < .01. ***p < .001.
Subset Analyses
For the results of all subset analyses, see Table 4. Ongoing support targeting instructional changes had a significant, positive subset effect (g = 1.09, 95% CI = [0.59, 1.59], p < .01); support without this target did not (g = 0.57, 95% CI = [–0.07, 1.21], p = .15). Computerized DBDM support had a significant, positive effect (g = 0.73, 95% CI = [0.23, 1.23], p < .05); person support had a nonreliable effect (df < 4; Tipton, 2015). Support with collaborative problem-solving had a significant, positive effect (g = 1.01, 95% CI = [0.58, 1.44], p < .01); support without this practice had a nonreliable effect. In terms of outcomes, only the effect on teachers’ DBI skills (g = 1.10, 95% CI = [1.46, 2.44], p < .01) was reliable.
Table 4.
Subset Variable Effect Size Analyses.
| Variable | Teacher effects | Student effects | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| k | n | g | 95% CI | df | I 2 | k | n | g | 95% CI | df | I 2 | |
| Comparison condition | ||||||||||||
| BAU control | 17 | 6 | 1.85*** | [1.44, 2.26] | 4.57 | 76.35% | 62 | 14 | 0.38** | [0.19, 0.57] | 7.14 | 61.50% |
| DBI without PD | 0 | 0 | 15 | 2 | 0.33 | [−0.32, 0.98] | 1.0 | 49.75% | ||||
| DBI + training | 2 | 1 a | 0.98** | [0.36, 1.61] | NA | NA | 8 | 2 | −0.15 | [−0.43, 0.14] | 1.0 | <0.01% |
| DBI + LE ongoing support | 27 | 12 | 0.21* | [0.05, 0.37] | 6.93 | 30.57% | 18 | 9 | 0.25 | [−0.24, 0.74] | 4.8 | 43.87% |
| Support for inst. changes | ||||||||||||
| Yes | 25 | 8 | 1.09** | [0.59, 1.59] | 5.80 | 83.94% | 73 | 11 | 0.30** | [0.17, 0.42] | 5.37 | 53.66% |
| No | 21 | 8 | 0.57 | [−0.07, 1.21] | 4.63 | 80.80% | 30 | 11 | 0.31 | [0.05, 0.58] | 5.75 | 72.08% |
| Ongoing support type | ||||||||||||
| Person | 14 | 5 | 1.14 | [0.28, 2.00] | 3.42 | 88.17% | 46 | 14 | 0.33* | [0.17, 0.50] | 3.98 | 71.51% |
| Comp. DBDM | 32 | 11 | 0.73* | [0.23, 1.23] | 6.98 | 79.81% | 50 | 8 | 0.27* | [0.10, 0.43] | 5.14 | 38.12% |
| Collaborative PS | ||||||||||||
| Yes | 31 | 12 | 1.01** | [0.58, 1.44] | 8.36 | 81.14% | 78 | 14 | 0.31** | [0.18, 0.44] | 6.18 | 54.67% |
| No | 15 | 4 | 0.53 | [−0.31, 1.37] | 2.38 | 85.44% | 25 | 8 | 0.30 | [0.06, 0.54] | 4.16 | 66.49% |
| Outcome | ||||||||||||
| Teacher beliefs | 8 | 5 | 0.58 | [0.07, 1.08] | 3.75 | 71.76% | ||||||
| Teacher fidelity | 11 | 6 | 0.32 | [−0.15, 0.78] | 3.80 | 72.27% | ||||||
| Teacher skills | 25 | 9 | 1.10** | [1.46, 2.44] | 6.49 | 83.21% | ||||||
| Teacher knowledge | 2 | 2 a | 2.22*** | [1.20, 3.25] | NA | NA | ||||||
| Student math | 56 | 8 | 0.32* | [0.18, 0.45] | 3.25 | 50.31% | ||||||
| Student reading | 28 | 9 | 0.30* | [0.08, 0.52] | 4.89 | 62.37% | ||||||
| Student writing | 25 | 7 | 0.30 | [−0.03, 0.62] | 3.37 | 71.97% | ||||||
Note. Effect sizes and standard errors reflect robust variance estimations (RVEs). Subset effects with df < 4 are not reliably estimated. LE = less extensive; inst. = instructional; k = number of effect sizes; n = number of studies; PS = problem-solving.
a Subset does not have RVE because it only includes one study or one effect per study.
*p < .05. **p < .01. ***p < .001.
For students, ongoing support that targeted instructional changes had a significant, positive effect (g = 0.30, 95% CI = [0.17, 0.42], p < .01); support without this target did not (g = 0.31, 95% CI = [0.05, 0.58], p = .06). Computerized DBDM support had a significant positive effect (g = 0.27, 95% CI = [0.10, 0.43], p < .05); the effect of person support was not reliable. Support that included collaborative problem-solving had a significant positive effect (g = 0.31, 95% CI = [0.18, 0.44], p < .01); support without this practice did not (g = 0.30, 95% CI = [0.05, 0.54], p = .07). Only the effect on reading outcomes was reliable (g = 0.30, 95% CI = [0.08, 0.52], p < .05).
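All of the subset estimates above are Hedges’ g values. For reference, a minimal sketch of the standard computation from two groups’ summary statistics follows; it is illustrative only, and the synthesis’s exact formulas (e.g., for adjusted posttest means) may differ:

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Small-sample-corrected standardized mean difference (Hedges' g)
    between a treatment group (t) and a comparison group (c)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # Hedges' correction factor
    return d * j
```

The correction factor j shrinks d slightly, which matters most in the small teacher samples typical of DBI studies.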
Discussion
The purpose of this synthesis was to evaluate the effects of ongoing support for DBI on teachers’ knowledge, skills, beliefs, and fidelity related to DBI, as well as on their students’ academic outcomes. We also aimed to describe ongoing support protocols from the research literature. Ongoing DBI support had a positive effect on teacher DBI outcomes (g = 0.86, n = 16 studies) and student academic outcomes (g = 0.31, n = 22 studies). Across 26 studies, ongoing support typically targeted graph interpretation and employed collaborative problem-solving. Fewer studies supported instructional changes. Instructional change support and computerized DBDM support did not moderate teacher or student effects. However, descriptively speaking, subset analyses indicated that support was effective for both teachers and students when it targeted instructional changes or included collaborative problem-solving.
Effects of Ongoing DBI Support on Teacher and Student Outcomes
The positive effect of ongoing support on teachers’ DBI-related outcomes and students’ academic achievement is consistent with general PD theory and with theory specific to DBI. Professional development over a sustained duration is necessary for teachers to build knowledge, change beliefs, and effectively apply what they have learned in practice with students (Desimone, 2009; Joyce & Showers, 1981). Lembke et al. (2018) also posited that ongoing support for DBI indirectly impacts student outcomes through improvements in teachers’ knowledge, skills, beliefs, and fidelity. These findings are consistent with a meta-analysis of the effects of DBDM PD on teachers’ knowledge, skills, self-efficacy, and fidelity (Gesel et al., 2021). Furthermore, they build on two previous syntheses of the effects of DBI on student outcomes (Jung et al., 2018; Stecker et al., 2005) by specifying the value of ongoing support for teachers’ use of DBI and the range of students who would benefit from individualized instruction. Ongoing support may be particularly important for DBI, given evidence that teachers struggle to use CBM data to make instructional decisions and face barriers to sustaining DBI in schools (e.g., Shanahan et al., 2024; van den Bosch et al., 2017).
We were unable to examine all moderators of interest (support for instructional changes, computerized DBDM, and collaborative problem-solving) due to the small number of studies, and the moderators we did include were not significant. Because studies provided relatively brief ongoing support in terms of total hours, more time in ongoing support may have been needed for variations in support characteristics to produce differing teacher and student outcomes. However, consistent with Jung et al. (2018), our subset analyses suggested that some aspects of ongoing support had descriptively different teacher- and student-level effects.
Positive teacher and student subset effects from ongoing support targeting teachers’ instructional changes suggest that a content focus on this challenging aspect of DBDM (e.g., van den Bosch et al., 2017) could support teachers’ use of DBI. Teachers may have benefited from support in deciding which changes to make and how to implement them because this process requires teachers to synthesize knowledge about students, instruction, and context in novel ways. The positive subset effect of collaborative problem-solving on student achievement is consistent with PD theory emphasizing the importance of active learning and teacher input (Desimone, 2009; Idol et al., 1995; Lembke et al., 2018). This finding is also consistent with previous research indicating that collaborative training leads to improved teacher and student outcomes in and outside the context of DBI (Filderman et al., 2021; Jung et al., 2018).
Limitations
Before considering the implications of these findings for research, it is important to note several limitations of this synthesis. First, the summary effect size for teacher outcomes may have been inflated due to publication bias. Second, substantial heterogeneity across studies could not be explained by the included moderators, suggesting that unexplored study characteristics influenced effects. For example, previous PD meta-analyses have examined associations between PD dosage and effects (e.g., Brock & Carter, 2017); unfortunately, there were too few studies in this synthesis to explore more moderators.
Third, almost no studies provided validity evidence for their teacher outcome measures. The teacher effect size must therefore be interpreted with caution, and research is needed to develop adequate measures of DBI knowledge, skills, beliefs, and fidelity. Fourth, we collapsed outcome measures into two broad constructs: teacher and student outcomes. Theoretically, the various DBI-related teacher outcomes are conceptually distinct and impact students’ progress directly or indirectly (Lembke et al., 2018). Collapsing these teacher outcomes into one construct, and including measures of teachers’ attitudes toward interventions, may have contributed to the substantial heterogeneity of effects (Gesel et al., 2021). Continued DBI research is needed for future meta-analyses to provide meaningful estimates of teacher effects. Synthesized effects of ongoing support on DBI fidelity may be especially useful in designing future PD (Shanahan et al., 2023).
Implications for Research
Data-based individualization researchers can optimize ongoing support protocols by testing variations and extensions of what teachers received in the studies in this synthesis. Previously, researchers have automated progress monitoring, graphing, and graph interpretation. They have also provided teachers with self-guided or automated DBI tools that support teachers’ synthesis of information to select and plan instructional changes. Some studies also provided individualized instructional materials to implement these changes. Researchers, and in a few instances educator peers, engaged teachers in collaborative problem-solving around general DBI-related issues. In the future, researchers may consider comparing the effects of various support practices and describing protocols in greater detail, given evidence that how content is learned can influence outcomes (e.g., Brock et al., 2017; Fallon et al., 2018). In addition, few studies provided the research-based, standardized intervention programs that teachers need as a platform for individualization (L. S. Fuchs et al., 2021; NCII, 2013). Providing teachers with these interventions may act not only as an additional teacher support but also as a means to increase student outcomes.
Studies often did not include treatments with support for instructional changes, which is an issue that must be addressed in future DBI research. Our finding is consistent with a recent synthesis indicating that DBI PD materials often do not address this topic (Espin, van den Bosch, et al., 2021). Individualizing instruction is the essence of DBI—without it, students with significant learning difficulties would continue to receive standardized intervention protocols, which may not be effective for them (Al Otaiba & Fuchs, 2006). Thus, ongoing support aimed at improving teachers’ interpretation of graphs and their use of that interpretation in practice may have greater impacts on students. This hypothesis is tentatively supported by the significant subset effect of ongoing support for instructional changes on student achievement from this synthesis.
Computerized DBDM did not moderate effects, and descriptive comparisons of support with or without these programs could not be made due to nonreliable effect estimates. Yet, there are meaningful, practical reasons to continue to investigate its utility. Coaching, consultation, and follow-up trainings have substantial costs (Knight, 2012). These programs reduce the resources needed for widespread DBI implementation (L. S. Fuchs et al., 2021) and, based on protocols in this synthesis, can support teachers with a variety of DBI components. These programs were studied during the late 1980s and early 1990s (e.g., L. S. Fuchs et al., 1989a, 1992). By harnessing current technology, researchers can update these programs to include active learning opportunities (e.g., performance feedback on teachers’ instructional change planning; Ergul, 2007), opportunities to collaborate with other teachers implementing DBI (e.g., Wesson, 1990), or guidance to solve implementation issues. In addition, artificial intelligence could index a variety of evidence-based practices to support increased individualization (Cardona et al., 2023; L. S. Fuchs et al., 2021).
Conclusion
The findings of this synthesis suggest that ongoing support can effectively equip teachers to implement DBI for their students with disabilities or learning difficulties. Although we found positive subset effects for support that targeted instructional changes and included collaborative problem-solving, these characteristics did not moderate overall effects. Given that researchers often did not include support for teachers’ data-based instructional change implementation or provide detailed descriptions of ongoing support practices, continued efforts to optimize DBI support in terms of efficiency and impact, which may include computerized DBDM, are needed. By doing so, future syntheses may be better able to identify the features of effective DBI PD, ultimately increasing the likelihood of DBI’s success in practice.
Supplemental Material
Supplemental material, sj-docx-1-ldx-10.1177_00222194241271335 for Ongoing Teacher Support for Data-Based Individualization: A Meta-Analysis and Synthesis by Emma Shanahan, Seohyeon Choi, Jechun An, Bess Casey-Wilke, Seyma Birinci, Caroline Roberts and Emily Reno in Journal of Learning Disabilities
Supplemental material, sj-docx-2-ldx-10.1177_00222194241271335 for Ongoing Teacher Support for Data-Based Individualization: A Meta-Analysis and Synthesis by Emma Shanahan, Seohyeon Choi, Jechun An, Bess Casey-Wilke, Seyma Birinci, Caroline Roberts and Emily Reno in Journal of Learning Disabilities
Supplemental material, sj-xlsx-3-ldx-10.1177_00222194241271335 for Ongoing Teacher Support for Data-Based Individualization: A Meta-Analysis and Synthesis by Emma Shanahan, Seohyeon Choi, Jechun An, Bess Casey-Wilke, Seyma Birinci, Caroline Roberts and Emily Reno in Journal of Learning Disabilities
Footnotes
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The research reported here was supported in part by the Institute of Education Sciences, U.S. Department of Education, through Grant R324B200012 to the University of Texas. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
ORCID iDs: Emma Shanahan
https://orcid.org/0000-0002-5594-4741
Seohyeon Choi
https://orcid.org/0000-0003-1721-4956
Jechun An
https://orcid.org/0000-0003-1746-4154
Supplemental Material: Supplemental material for this article is available at https://doi.org/10.1177/00222194241271335
References
* Indicates that the reference is for a study included in this synthesis.
- *Allinder R. M. (1996). When some is not better than none: Effects of differential implementation of curriculum-based measurement. Exceptional Children, 62(6), 525–535. 10.1177/001440299606200604 [DOI] [Google Scholar]
- *Allinder R. M., Beckbest M. A. (1995). Differential effects of two approaches to supporting teachers’ use of curriculum-based measurement. School Psychology Review, 24(2), 287–298. 10.1080/02796015.1995.12085768 [DOI] [Google Scholar]
- *Allinder R. M., Bolling R. M., Oats R. G., Gagnon W. A. (2000). Effects of teacher self-monitoring on implementation of curriculum-based measurement and mathematics computation achievement of students with disabilities. Remedial and Special Education, 21(4), 219–226. 10.1177/074193250002100403 [DOI] [Google Scholar]
- Al Otaiba S., Fuchs D. (2006). Who are the young children for whom best practices in reading are ineffective? An experimental and longitudinal study. Journal of Learning Disabilities, 39(5), 414–431. 10.1177/00222194060390050401 [DOI] [PubMed] [Google Scholar]
- Bergan J. (1977). Behavioral consultation. Charles E. Merrill. [Google Scholar]
- Brock M. E., Cannella-Malone H. I., Seaman R. L., Andzik N. R., Schaefer J. M., Page E. J., Barczak M. A., Dueker S. A. (2017). Findings across practitioner training studies in special education: A comprehensive review and meta-analysis. Exceptional Children, 84(1), 7–26. 10.1177/0014402917698008 [DOI] [Google Scholar]
- Brock M. E., Carter E. W. (2017). A meta-analysis of educator training to improve implementation of interventions for students with disabilities. Remedial and Special Education, 38(3), 131–144. 10.1177/0741932516653477 [DOI] [Google Scholar]
- Cardona M. A., Rodríguez R. J., Ishmael K. (2023). Artificial intelligence and the future of teaching and learning. U.S. Department of Education, Office of Educational Technology. https://oet.wp.nnth.dev/ai-future-of-teaching-and-learning/
- Cook B. G., Buysse V., Klingner J., Landrum T. J., McWilliam R. A., Tankersley M., Test D. W. (2015). CEC’s standards for classifying the evidence base of practices in special education. Remedial and Special Education, 36(4), 220–234. 10.1177/0741932514557271 [DOI] [Google Scholar]
- Danielson L., Rosenquist C. (2014). Introduction to the TEC special issue on data-based individualization. TEACHING Exceptional Children, 46(4), 6–12. 10.1177/0040059914522965 [DOI] [Google Scholar]
- Deno S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232. 10.1177/001440298505200303 [DOI] [PubMed] [Google Scholar]
- Deno S. L. (2014). Reflections on progress monitoring and data-based intervention. In Cook B. G., Landrum T. J., Tankersley M. (Eds.), Special education past, present, and future: Perspectives from the field (Vol. 27, pp. 171–194). Emerald Group. 10.1108/S0735-004X20140000027010 [DOI] [Google Scholar]
- Deno S. L., Mirkin P. (1977). Data-based program modification: A manual. Leadership Training Institute/Special Education, University of Minnesota. [Google Scholar]
- Desimone L. M. (2009). Improving impact studies of teachers’ professional development: Toward better conceptualizations and measures. Educational Researcher, 38(3), 181–199. 10.3102/0013189X08331140 [DOI] [Google Scholar]
- *Ergul C. (2007). Curriculum based decision making in an early literacy program [Doctoral dissertations, Arizona State University]. ProQuest Dissertations. [Google Scholar]
- Espin C. A., Förster N., Mol S. E. (2021). International perspectives on understanding and improving teachers’ data-based instruction and decision making: Introduction to the special series. Journal of Learning Disabilities, 54(4), 239–242. 10.1177/00222194211017531 [DOI] [PubMed] [Google Scholar]
- Espin C. A., van den Bosch R. M., van der Liende M., Rippe R. C. A., Beutick M., Langa A., Mol S. E. (2021). A systematic review of CBM professional development materials: Are teachers receiving sufficient instruction in data-based decision-making? Journal of Learning Disabilities, 54(4), 256–268. 10.1177/0022219421997103 [DOI] [PubMed] [Google Scholar]
- Espin C. A., Wayman M. M., Deno S. L., McMaster K. L., de Rooij M. (2017). Data-based decision-making: Developing a method for capturing teachers’ understanding of CBM graphs. Learning Disabilities Research & Practice, 32(1), 8–21. 10.1111/ldrp.12123 [DOI] [Google Scholar]
- Fallon L. M., Kurtz K. D., Mueller M. R. (2018). Direct training to improve educators’ treatment integrity: A systematic review of single-case design studies. School Psychology Quarterly, 33(2), 169–181. 10.1037/spq0000210
- Filderman M. J., Barnard-Brak L., Benner G. J. (2022). Do teacher beliefs mediate the relationship between professional development and reading outcomes of students with emotional and behavioral disorders? An exploration of effects from a randomized controlled trial. Social Psychology of Education, 25(6), 1437–1458. 10.1007/s11218-022-09731-5
- Filderman M. J., Toste J. R., Didion L., Peng P. (2021). Data literacy training for K–12 teachers: A meta-analysis of the effects on teacher outcomes. Remedial and Special Education, 42(5), 1–16. 10.1177/07419325211054208
- Filderman M. J., Toste J. R., Didion L. A., Peng P., Clemens N. H. (2018). Data-based decision-making in reading interventions: A synthesis and meta-analysis of the effects for struggling readers. The Journal of Special Education, 52(3), 174–187. 10.1177/0022466918790001
- *Förster N., Kawohl E., Souvignier E. (2018). Short- and long-term effects of assessment-based differentiated reading instruction in general education on reading fluency and reading comprehension. Learning and Instruction, 56, 98–109. 10.1016/j.learninstruc.2018.04.009
- Fuchs D., Fuchs L. S., Vaughn S. (2014). What is intensive instruction and why is it important? TEACHING Exceptional Children, 46(4), 13–18. 10.1177/0040059914522966
- *Fuchs L. S. (1988). Effects of computer-managed instruction on teachers’ implementation of systematic monitoring programs and student achievement. The Journal of Educational Research, 81(5), 294–304. 10.1080/00220671.1988.10885838
- *Fuchs L. S., Allinder R. M., Hamlett C. L., Fuchs D. (1990). An analysis of spelling curricula and teachers’ skills in identifying error types. Remedial and Special Education, 11(1), 42–52. 10.1177/074193259001100107
- *Fuchs L. S., Deno S. L., Mirkin P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21(2), 449–460. 10.3102/00028312021002449
- *Fuchs L. S., Fuchs D. (1990). The role of skills analysis in curriculum-based measurement in math. School Psychology Review, 19(1), 6–22. 10.1080/02796015.1990.12087335
- *Fuchs L. S., Fuchs D. (1991). Effects of expert system advice within curriculum-based measurement on teacher planning and student achievement in spelling. School Psychology Review, 20(1), 49–66. 10.1080/02796015.1991.12085532
- *Fuchs L. S., Fuchs D. (1993). Effects of systematic observation and feedback on teachers’ implementation of curriculum-based measurement. Teacher Education and Special Education, 16(2), 178–187. 10.1177/088840649301600210
- *Fuchs L. S., Fuchs D., Hamlett C. L. (1989a). Computers and curriculum-based measurement: Effects of teacher feedback systems. School Psychology Review, 18, 112–125. 10.1080/02796015.1989.12085405
- *Fuchs L. S., Fuchs D., Hamlett C. L. (1989b). Effects of alternative goal structures within curriculum-based measurement. Exceptional Children, 55(5), 429–438. 10.1177/001440298905500506
- *Fuchs L. S., Fuchs D., Hamlett C. L. (1989c). Monitoring reading growth using student recalls: Effects of two teacher feedback systems. The Journal of Educational Research, 83(2), 103–110. 10.1080/00220671.1989.10885938
- *Fuchs L. S., Fuchs D., Hamlett C. L., Allinder R. M. (1991). The contribution of skills analysis to curriculum-based measurement in spelling. Exceptional Children, 57(5), 443–452. 10.1177/001440299105700507
- *Fuchs L. S., Fuchs D., Hamlett C. L., Ferguson C. (1992). Effects of expert system consultation within curriculum-based measurement, using a reading maze task. Exceptional Children, 58(5), 436–450. 10.1177/001440299205800507
- *Fuchs L. S., Fuchs D., Hamlett C. L., Hasselbring T. S. (1987). Using computers with curriculum-based monitoring: Effects on teacher efficiency and satisfaction. Journal of Special Education Technology, 8(4), 14–27. 10.1177/016264348700800402
- *Fuchs L. S., Fuchs D., Hamlett C. L., Phillips N. B., Karns K. (1995). General educators’ specialized adaptation for students with learning disabilities. Exceptional Children, 61(5), 440–459. 10.1177/001440299506100504
- *Fuchs L. S., Fuchs D., Hamlett C. L., Stecker P. M. (1991). Effects of curriculum-based measurement and consultation on teacher planning and student achievement in mathematics operations. American Educational Research Journal, 28(3), 617–641. 10.3102/00028312028003617
- Fuchs L. S., Fuchs D., Hamlett C. L., Stecker P. M. (2021). Bringing data-based individualization to scale: A call for the next-generation technology of teacher supports. Journal of Learning Disabilities, 54(5), 319–333. 10.1177/0022219420950654
- Fuchs L. S., Fuchs D., Malone A. S. (2017). The taxonomy of intervention intensity. TEACHING Exceptional Children, 50(1), 194–202. 10.1177/0040059918758166
- *Fuchs L. S., Hamlett C. L., Fuchs D., Stecker P. M., Ferguson C. (1988). Conducting curriculum-based measurement with computerized data collection: Effects on efficiency and teacher satisfaction. Journal of Special Education Technology, 9(2), 73–86. 10.1177/016264348800900202
- Gesel S. A., LeJeune L. M., Chow J. C., Sinclair A. C., Lemons C. J. (2021). A meta-analysis of the impact of professional development on teachers’ knowledge, skill, and self-efficacy in data-based decision-making. Journal of Learning Disabilities, 54(4), 269–283. 10.1177/0022219420970196
- Hedges L. V., Tipton E., Johnson M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1(1), 39–65. 10.1002/jrsm.5
- Higgins J. P. T., Thomas J., Chandler J., Cumpston M., Li T., Page M. J., Welch V. A. (2019). Cochrane handbook for systematic reviews of interventions. Wiley.
- Higgins J. P. T., Thompson S. G., Deeks J. J., Altman D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. 10.1136/bmj.327.7414.557
- Idol L., Paolucci-Whitcomb P., Nevin A. (1995). The collaborative consultation model. Journal of Educational and Psychological Consultation, 6(4), 329–346. 10.1207/s1532768xjepc0604_3
- Joyce B. R., Showers B. (1981). Transfer of training: The contribution of “coaching.” Journal of Education, 163(2), 163–172. 10.1177/002205748116300208
- Jung P.-G., McMaster K. L., Kunkel A., Shin J., Stecker P. M. (2018). Effects of data-based individualization for students with intensive learning needs: A meta-analysis. Learning Disabilities Research & Practice, 33(3), 144–155. 10.1111/ldrp.12172
- Kearns D. M., Walker M. A., Borges J. C., Duffy M. E. (2022). Can reading practitioners and researchers improve intensive reading support systems in a large urban school system? Journal of Research in Reading, 45(3), 488–516. 10.1111/1467-9817.12406
- Klingner J. K., Boardman A. G., McMaster K. L. (2013). What does it take to scale up and sustain evidence-based practices? Exceptional Children, 79(3), 195–211. 10.1177/001440291307900205
- Knight D. S. (2012). Assessing the cost of instructional coaching. Journal of Education Finance, 38(1), 52–80. https://www.jstor.org/stable/23259121
- Kretlow A. G., Bartholomew C. C. (2010). Using coaching to improve the fidelity of evidence-based practices: A review of studies. Teacher Education and Special Education, 33(4), 279–299. 10.1177/0888406410371643
- Lembke E. S., McMaster K. L., Smith R. A., Allen A., Brandes D., Wagner K. (2018). Professional development for data-based instruction in early writing: Tools, learning, and collaborative support. Teacher Education and Special Education, 41(2), 106–120. 10.1177/0888406417730112
- Lemons C. J., Sinclair A. C., Gesel S., Gandhi A. G., Danielson L. (2019). Integrating intensive intervention into special education services: Guidance for special education administrators. Journal of Special Education Leadership, 32(1), 29–38. https://eric.ed.gov/?id=EJ1274929
- Mathes P. (2012). Peer-Assisted Learning Strategies. Institute of Education Sciences, U.S. Department of Education.
- *Mathes P. G., Fuchs D., Roberts P. H., Fuchs L. S. (1998). Preparing students with special needs for reintegration: Curriculum-based measurement’s impact on transenvironmental programming. Journal of Learning Disabilities, 31(6), 615–624. 10.1177/002221949803100613
- *McCullum N. (1999). Peer collaboration for instructional decision-making within curriculum-based measurement in reading [Doctoral dissertation, University of Oregon]. ProQuest Dissertations.
- McMaster K. L., Baker K., Donegan R., Hugh M., Sargent K. (2020). Professional development to support teachers’ implementation of intensive reading intervention: A systematic review. Remedial and Special Education, 42(5), 1–14. 10.1177/0741932520934099
- *McMaster K. L., Lembke E. S., Shanahan E., Choi S., An J., Schatschneider C., Duesenberg-Marshall M., Birinci S., McCollom E., Garman C., Moore K. (Forthcoming). Supporting teachers’ data-based individualization of early writing instruction: An efficacy trial. Journal of Learning Disabilities.
- *McMaster K. L., Lembke E. S., Shin J., Poch A. L., Smith R. A., Jung P.-G., Allen A. A., Wagner K. (2020). Supporting teachers’ use of data-based instruction to improve students’ early writing skills. Journal of Educational Psychology, 112(1), 1–21. 10.1037/edu0000358
- National Center on Intensive Intervention. (2013). Data-based individualization: A framework for intensive intervention. American Institutes for Research.
- Ouzzani M., Hammady H., Fedorowicz Z., Elmagarmid A. (2016). Rayyan—A web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210. 10.1186/s13643-016-0384-4
- Powell S. R., Lembke E. S., Ketterlin-Geller L. R., Petscher Y., Hwang J., Bos S. E., Cox T., Hirt S., Mason E. N., Pruitt-Britton T., Thomas E., Hopkins S. (2021). Data-based individualization in mathematics to support middle school teachers and their students with mathematics learning difficulty. Studies in Educational Evaluation, 69, 100897. 10.1016/j.stueduc.2020.100897
- Shanahan E. (2023). Effects of data-based writing instruction on the reading outcomes of elementary students with writing difficulties [Doctoral dissertation, University of Minnesota]. ProQuest Dissertations.
- Shanahan E., Birinci S., Alghamdi A., Reno E., Lembke E., McMaster K. (2024). Sustained use of data-based writing instruction before and during the COVID-19 pandemic. Journal of Educational Psychology. 10.1037/edu0000864
- Shanahan E., McMaster K. L., Bresina B. C., McKevett N. M., Choi S., Lembke E. S. (2023). Teacher predictors of student progress in data-based writing instruction: Knowledge, skills, beliefs, and instructional fidelity. Journal of Learning Disabilities, 56(6), 440–452. 10.1177/00222194231157720
- Stecker P. M., Fuchs L. S., Fuchs D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819. 10.1002/pits.20113
- Swain K. D., Hagaman J. L. (2020). Elementary special education teachers’ use of CBM data: A 20-year follow-up. Preventing School Failure: Alternative Education for Children and Youth, 64(1), 48–54. 10.1080/1045988X.2019.1678009
- Tipton E. (2015). Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20(3), 375–393. 10.1037/met0000011
- van den Bosch R. M., Espin C. A., Chung S., Saab N. (2017). Data-based decision-making: Teachers’ comprehension of curriculum-based measurement progress-monitoring graphs. Learning Disabilities Research & Practice, 32(1), 46–60. 10.1111/ldrp.12122
- Viechtbauer W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. 10.18637/jss.v036.i03
- *Wackerle-Hollman A. K. (2009). The effects of progress monitoring and consultation on emergent literacy performance as measured by the Individual Growth and Development Indicators [Doctoral dissertation, University of Minnesota]. ProQuest Dissertations.
- *Wesson C. L. (1990). Curriculum-based measurement and two models of follow-up consultation. Exceptional Children, 57(3), 246–256. 10.1177/001440299105700307
- What Works Clearinghouse. (2022). What Works Clearinghouse procedures and standards handbook, version 5.0. https://ies.ed.gov/ncee/wwc/Handbooks
- Zumeta Edmonds R. O. (2015). Implementing intensive intervention: How do we get there from here? Remedial and Special Education, 36(2), 83–88. 10.1177/0741932514558935
Supplementary Materials
Supplemental material for "Ongoing Teacher Support for Data-Based Individualization: A Meta-Analysis and Synthesis" by Emma Shanahan, Seohyeon Choi, Jechun An, Bess Casey-Wilke, Seyma Birinci, Caroline Roberts, and Emily Reno in Journal of Learning Disabilities:
- sj-docx-1-ldx-10.1177_00222194241271335
- sj-docx-2-ldx-10.1177_00222194241271335
- sj-xlsx-3-ldx-10.1177_00222194241271335

