
Multitiered Systems of Support, Adaptive Interventions, and SMART Designs

Greg Roberts 1, Nathan Clemens 1, Christian T Doabler 1, Sharon Vaughn 1, Daniel Almirall 2, Inbal Nahum-Shani 2

Abstract

This article introduces the special section on adaptive interventions and sequential multiple-assignment randomized trial (SMART) research designs. In addition to describing the two accompanying articles, we discuss features of adaptive interventions (AIs) and describe the use of SMART design to optimize AIs in the context of multitiered systems of support (MTSS) and integrated MTSS. AI is a treatment delivery model that explicitly specifies how information about individuals should be used to decide which treatment to provide in practice. Principles that apply to the design of AIs may help to more clearly operationalize MTSS-based programs, improve their implementation in school settings, and increase their efficacy when used according to evidence-based decision rules. A SMART is a research design for developing and optimizing MTSS-based programs. We provide a running example of a SMART design to optimize an MTSS-aligned AI that integrates academic and behavioral interventions.


Multitiered systems of support (MTSS), including response to intervention and schoolwide positive behavioral supports, have been a primary focus of special education research for almost 20 years. This research has unfolded largely in silos: in one, interventionists have developed efficacious and effective treatments using randomized controlled trials (RCTs); in another, measurement specialists have produced reliable and sensitive instruments, grounded in modern measurement theories, to screen for risk and to monitor students’ response to intervention. (The silo phenomenon is common, given that scholars in the educational sciences are encouraged to specialize; see Reinholz & Andrews, 2019.) As a result, MTSS features efficacious treatments as well as reliable measures, particularly in reading and mathematics at the elementary-grade levels. However, very little research has considered the rules or protocols necessary to integrate specific interventions with screening and progress-monitoring measures to provide increasingly intensive support options allocated according to student need. Questions about when and for whom a given treatment option might be most effectively assigned, altered, or ended altogether, as well as questions about the best sequence(s) in which to present treatments to different student groups, have gone largely unaddressed (Clemens et al., 2016, 2018; Deno, 2016; Kauffman et al., 2019; Stoiber & Gettinger, 2016). Accordingly, MTSS has remained more of a “framework” that school-based educators are encouraged to apply to their local circumstances rather than a fully operational and replicable intervention grounded in rigorous research (Fuchs & Fuchs, 2017). These “schoolwide” models’ lack of replicable, empirically derived protocols or decision rules may contribute to the well-documented challenges associated with fully implementing MTSS in schools, districts, and states (Balu et al., 2015; Bradshaw et al., 2010; Fallon et al., 2014; Fuchs & Fuchs, 2017; Horner et al., 2009; Solomon et al., 2012), given that educators cite difficulties using student data as the number-one barrier to implementation (Espin et al., 2017; van den Bosch et al., 2017; D. Wagner et al., 2017; Zeuch et al., 2017).

Adaptive interventions, also known as dynamic treatment regimens (Chakraborty & Murphy, 2014), dynamic treatment strategies (Lavori & Dawson, 2014), multistage treatment strategies (Thall & Wathen, 2005), and treatment policies (Lunceford et al., 2002), represent a model that may provide educators with more fully specified and thus more easily replicated and implemented MTSS-aligned interventions.

Adaptive interventions comprise not only treatment options (i.e., instructional activities or behavior-related strategies) and screening and monitoring measures, per the MTSS framework, but also explicit rules that specify how screening and monitoring information should be used in practice to select, amend, combine, or end one or more instructional and behavioral supports (Nahum-Shani & Almirall, 2019). MTSS emphasizes the importance of using data to make treatment decisions, whereas adaptive interventions provide explicit guidelines about which measures to use (e.g., silent reading efficiency), when to use them (e.g., at the beginning of the school year), and how to use them (e.g., a specific cut score) to select the best treatment option(s) for the student. Hence, fidelity to an adaptive intervention requires not only implementing the instructional or behavioral supports as intended but also obtaining student data as described and adhering to the prespecified decision rules linking this information to the most appropriate support.

The classic RCT, which involves the comparison of a treatment or a treatment package with a suitable control, remains the gold standard for evaluating a treatment’s efficacy (Collins et al., 2004), including an adaptive intervention (Almirall, Nahum-Shani, et al., 2018). Other experimental designs, however, may be more useful for building an adaptive intervention prior to evaluating its efficacy compared with a suitable control (see Collins et al., 2014; Almirall, Nahum-Shani, et al., 2018). The sequential multiple-assignment randomized trial (SMART; Lavori & Dawson, 2014; Murphy, 2005) is one of several experimental designs that can be used to address questions about formulating effective adaptive interventions (Collins et al., 2014). By empirically informing the development of decision rules that specify how student data should be used to select appropriate supports, SMART designs have the potential to promote the effectiveness and replicability of MTSS-aligned interventions.

In this introductory article to the special section of Exceptional Children, we describe adaptive interventions and their connection to MTSS. We also discuss how SMART designs can be used to inform the development of MTSS-aligned adaptive interventions. To illustrate these ideas, we introduce Behavior and Academic Supports: Integration and Cohesion (Project BASIC; Clemens, 2018–2023), an ongoing research project funded by the Institute of Education Sciences that uses a SMART design to inform the optimal integration of selected behavioral and reading supports for students in second and third grades who are at risk for learning difficulties. We also introduce the two empirical articles that accompany this introduction; both present results from studies using SMART designs among students with autism spectrum disorder (ASD).

MTSS and Adaptive Interventions

For purposes of this article, we assume that readers have a more than passing acquaintance with MTSS. Along with many others (e.g., Sugai & Horner, 2002; Vaughn & Fuchs, 2003), we define it as a framework for providing increasingly intensive evidence-based supports (academic or behavioral) using data-driven decision making. Its key features include the provision of integrated core instructional and behavioral supports to all students; regular universal screening to identify students who may require support beyond core evidence-based treatments, including secondary and tertiary interventions; and progress monitoring of students receiving more intensive supports to inform decision making about adjusting instruction. We also recognize MTSS as a general framework rather than a manualized intervention, because MTSS does not feature an explicit protocol to guide treatment-related decision making in practice. The same distinction, between framework and intervention, applies to positive behavioral interventions and supports (Horner et al., 2009), response to intervention (Fuchs & Fuchs, 2017), and data-based individualization (Lemons et al., 2017; Stecker et al., 2005).

In adaptive interventions, evidence-based protocols are central, providing sequenced decision rules that explicitly specify how initial and ongoing information about a student is used to select, amend, combine, or conclude treatment.

Applying the principles for designing adaptive interventions to the development of MTSS-aligned treatments can facilitate their replicability in real-world implementation and enhance their efficacy when they are used according to evidence-based decision rules. Further, experimental designs for systematically addressing questions about the construction of adaptive interventions, such as the SMART design, can be used to empirically inform the development of effective MTSS-aligned interventions, advancing research aims that have been a focus for almost 20 years (Fletcher & Vaughn, 2009; Jimerson et al., 2007; McIntosh & Goodman, 2016). In the next sections, we outline the specific elements of an adaptive intervention and provide several hypothetical examples of adaptive interventions. We then discuss in more detail how a SMART design can be used to address scientific questions pertaining to these examples.

Examples of Adaptive Interventions

An adaptive intervention is formally defined as a “prespecified, replicable sequence of decision rules that guides whether, how, when, and which measures to use to make critical decisions about interventions” (Nahum-Shani & Almirall, 2019, p. 2).

“Adaptive” refers to the use of information about the individual (e.g., characteristics and response to prior treatment) to decide whether and how to intervene at specific time points. By offering the best available treatment option only to those who need it, and only when they need it, adaptive interventions can advance outcomes for the greatest number of children while minimizing treatment cost and burden (Kasari et al., this issue; Nahum-Shani & Almirall, 2019).

Adaptive interventions are prevalent in prevention science (e.g., Almirall, Kasari, et al., 2018; Collins et al., 2004; Zarit et al., 2013), medicine (Tsiatis, 2019; Zhang et al., 2018), and public health (e.g., Almirall & Chronis-Tuscano, 2016; Brown et al., 2009; Collins et al., 2007). Interest among education researchers is growing (Majeika et al., 2020; Nahum-Shani & Almirall, 2019). Pelham et al. (1992, 2016) developed an adaptive intervention for children with attention-deficit hyperactivity disorder that begins (at the start of the school year) with a low treatment dose (e.g., a low dose of medication), monitors response status monthly (starting at Week 8) based on assessments of individualized target behaviors, and then offers more support (e.g., adding a behavioral intervention to the medication) to students who show early signs of nonresponse, while responders continue with the initial low dose.

More recently, Heppen et al. (2020) used a SMART design to develop a text-messaging intervention to address chronic absenteeism in elementary schools. The goal was to develop an adaptive intervention that starts in the fall by offering families basic text messaging, consisting of weekly reminders about the importance of attendance and same-day notifications when their child misses school; monitors absences during the fall using school-reported attendance; and then provides an intensified messaging intervention in the spring to families whose child was absent for 8% or more of school days during the fall, while families whose child missed fewer than 8% of days continue with the basic intervention. The interventions developed in the study are adaptive because (a) they use ongoing information about the individual to decide whether and how to intervene and (b) these prespecified decision rules are part of the intervention itself.
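
The decision rule in the Heppen et al. (2020) example is simple enough to express directly in code. The sketch below is purely illustrative; the 8% threshold comes from the study as described above, but the function and variable names are ours, not the study’s software.

```python
# Illustrative sketch of the Heppen et al. (2020) decision rule; the 8%
# threshold is from the text, but the function and variable names here
# are hypothetical.

def spring_messaging_option(fall_days_absent: int, fall_days_enrolled: int) -> str:
    """Return the spring intervention option for a family, given fall attendance."""
    absence_rate = fall_days_absent / fall_days_enrolled
    if absence_rate >= 0.08:   # absent 8% or more of school days in the fall
        return "intensified messaging"
    return "basic messaging"   # weekly reminders + same-day absence notifications
```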

Elements of an Adaptive Intervention

An adaptive intervention comprises four key elements (Table 1; Nahum-Shani & Almirall, 2019) that are prespecified by its developers to promote fidelity and replicability (Collins et al., 2004). These elements represent features of the intervention rather than an experimental design (i.e., an adaptive intervention does not involve randomizations to experimental conditions).

Table 1.

Elements of Adaptive Interventions, Described in the Context of MTSS.

Decision points: The sequence of occasions at which risk or response is evaluated and treatment decisions are made.
Tailoring variables: The constructs (or measures of constructs) used to identify at-risk students or to evaluate students’ response for the purpose of making intervention decisions.
Intervention options: The different types of supports, doses, frequencies, or tactics considered at each decision point.
Decision rules: Rules that link the tailoring variables to intervention options at each decision point; they specify which of the intervention options is most appropriate for a student at a given decision point.

Note. MTSS = multitiered systems of support.

Decision points

Decision points are the occasions on which choices about future or ongoing intervention are made. In MTSS terms, they represent the points at which data are reviewed and ongoing or future supports are selected. These data and the selected supports are often at the student level in special education but may also be at the cluster or group level. In specific operationalizations of MTSS, interventions often begin early in the school year and are then reviewed and perhaps modified at midyear. For students already receiving more intensive supports, more frequent decision points (e.g., weekly or biweekly) may be used for making intervention decisions.

Intervention options

Intervention options are the treatments (interventions as traditionally understood) considered at each decision point. These may include the activities and routines for improving students’ learning, supporting their behavior, and engaging them in treatment. Intervention options may also include a session’s duration (e.g., number of minutes per session), frequency (e.g., number of sessions per week), and discontinuation. The duration of a treatment stage (i.e., a period of time following a decision point during which the individual experiences the assigned intervention option) and the frequency of monitoring may also represent intervention options. Note that the stages of an adaptive intervention may or may not align with the tiers associated with MTSS (e.g., multiple stages of an adaptive intervention may occur within the same tier). Interventions can be provided, continued, discontinued, intensified, or augmented based on tailoring variables.

Tailoring variables

Tailoring variables are the data used to make intervention decisions. Information on tailoring variables may be collected at baseline and during the intervention. Baseline information can include demographics, academic and behavioral markers, disability status, and records of prior school attendance. This information can be used to select the best first-stage intervention for a student or for groups of students. Both baseline and intermediate tailoring variables (e.g., response to first-stage intervention) can inform subsequent intervention decisions. Tailoring variables that are aligned with the schoolwide MTSS framework include students’ risk status and their ongoing response to an intervention. Information about risk status is typically collected in the fall, winter, and spring as part of a school’s universal screening program. Progress-monitoring data, collected as a means of assessing ongoing response, are typically gathered weekly or biweekly for students participating in more intensive treatments (e.g., secondary and tertiary treatments). Prior interventions offered, as well as adherence to or engagement in prior interventions, can also inform intervention decisions to the extent that this information is useful in identifying subgroups of students who require intervention modifications.

Decision rules

Decision rules link the tailoring variables to intervention options; they specify which of the available options is most appropriate for a student (or a subgroup of students) at a given decision point. By specifying which intervention to offer, for whom, and under what conditions, decision rules make transparent the often-opaque process of using student data to select appropriate interventions (Collins, 2018). The decision rules in an adaptive intervention should be comprehensive (Collins et al., 2004); they should describe the recommended course of action for all subgroups in the target population (e.g., both responders and nonresponders) and cover all possible situations that may occur in practice, including what to do if the tailoring variable is missing and under what conditions and to what extent educator or practitioner judgment should be allowed. The decision rules should be clear and specific to minimize misinterpretation by implementing educators or practitioners. Decision rules can be represented as a series of “if ... then ... otherwise” statements.
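
Because decision rules take this “if ... then ... otherwise” form, they can be written as short, prespecified functions. The following minimal sketch illustrates the structure, including the handling of a missing tailoring variable that a comprehensive rule requires; the cut score, option labels, and missing-data policy are hypothetical placeholders of our own, not a recommended protocol.

```python
from typing import Optional

# A minimal sketch of one decision rule; the cut score, option labels, and
# missing-data policy are hypothetical placeholders.
def decision_rule(tailoring_value: Optional[float], cut_score: float) -> str:
    """Map a tailoring variable to an intervention option at one decision point."""
    if tailoring_value is None:
        # Comprehensive rules specify what to do when the tailoring variable is missing.
        return "readminister the measure; continue current supports in the interim"
    if tailoring_value < cut_score:      # if ... then ...
        return "intensified support"
    return "continue current supports"   # ... otherwise
```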

SMART Design

SMART designs (Dawson & Lavori, 2012; Murphy, 2005) are experimental designs that can be used empirically to develop adaptive interventions. A SMART design involves multiple stages of randomization; that is, some or all experimental participants may get randomized at two or more decision points. Each randomization is intended to address research questions about whether, how, and under what conditions to intervene at a particular decision point. In the next section, we discuss Project BASIC, which uses SMART design to empirically construct an MTSS-aligned adaptive intervention that integrates reading interventions and behavior interventions to improve academic outcomes for students.

An Example SMART Design: Project BASIC

Project BASIC is motivated by a recent trend toward integrated MTSS (McIntosh & Goodman, 2016; Stoiber & Gettinger, 2016), in which content-area academic components are combined with behavioral, social, or emotional interventions under the assumption that schools need efficient and effective interventions to support students with academic as well as behavioral, emotional, and social needs.

In BASIC, we focus on struggling readers. We evaluate the possibility that providing supports in these adjacent areas may improve the outcome of primary interest. Hence, Project BASIC (Clemens, 2018–2023) focuses on adapting a reading intervention to include a behavior treatment for students in second grade who are struggling to read. In the next sections, we describe the intervention options considered as part of Project BASIC and the rationale for their selection. We then provide a hypothetical example of an adaptive intervention based on Project BASIC and outline the research questions that motivated the SMART design.

Intervention Options in Project BASIC

Project BASIC features four intervention options: (a) an evidence-based reading intervention, (b) an evidence-based intensified, individualized reading intervention, (c) core reading instruction, and (d) an evidence-based self-regulation intervention. The evidence-based reading intervention (Vaughn et al., 2019) addresses word reading, reading fluency, and reading comprehension. Theoretically, the intervention is based on Jeanne Chall’s (1996) developmental stages of reading, which describe the transition from basic word-reading skills to more complex comprehension skills, and on the simple view of reading (Gough & Tunmer, 1986), which argues that weaknesses in either decoding or linguistic comprehension can cause comprehension difficulties. The Project BASIC reading intervention is appropriate for struggling readers in second grade because it uses materials and methods with demonstrated efficacy to target skills that educators expect students to have mastered: reading words accurately, reading text fluently, and deriving meaning from print. A recent RCT (Vaughn et al., 2019) found that the reading intervention improved multisyllabic word reading (effect size [ES] = .45) and reading fluency (ES = .47) by almost 0.5 standard deviations and improved reading comprehension (ES = .21, .22) by almost 0.25 standard deviations. The intervention includes 50 lessons (30 min each) designed for small groups of five to eight students using text-based reading (expository and narrative texts) and word-reading components. Lessons are delivered 5 days per week. The instruction supplements rather than supplants core (Tier 1) reading instruction.

The intensified, individualized reading intervention is delivered in groups of fewer than five students. The groups are more homogeneous in terms of their reading levels and their collective instructional needs; accordingly, the intervention is more focused and intense. Core reading instruction is the regular reading instruction provided by participating schools, which will differ by teacher, school, or school district.

The self-regulation intervention targets students’ academic engagement (i.e., being on task and attending to instruction or an assigned task). Academic engagement is viewed as a keystone behavior, that is, a behavior with broad and widespread benefits for students in current and future environments (Barnett, 2005). Keystone behaviors like academic engagement are typically incompatible with maladaptive and antisocial behaviors (e.g., it is difficult to be engaged with a task and disruptive at the same time) and, when improved, can positively influence academic outcomes, interpersonal relationships, and the classroom or school environment for other students (DiGangi et al., 1991; DuPaul et al., 1998; Greenwood, 1996; Greenwood et al., 1994; McLaughlin et al., 1977; Prater et al., 1992; Wood et al., 1998). Keystone behaviors are attractive targets for intervention given their relative simplicity and their potential impact across multiple domains (Ducharme & Shecter, 2011).

The Project BASIC self-regulation intervention employs self-monitoring strategies, in which students learn to actively monitor and record their behavior for periods of time. Self-monitoring interventions have been effectively applied across age groups to promote a variety of prosocial and self-regulated behaviors. In school settings, self-monitoring strategies have proven practical, with positive effects on task engagement, work completion, and response accuracy across a wide range of ages, disabilities, and settings (for reviews, see Briesch & Chafouleas, 2009; Ducharme & Shecter, 2011; Mooney et al., 2005; Reid et al., 2005; Sheffield & Waller, 2010; Webber et al., 1993). Meta-analyses of self-monitoring interventions report large effects on students’ on-task behavior (Guzman et al., 2018; Reid et al., 2005). The goal in Project BASIC is to identify sequences and combinations of the selected reading and self-regulation treatments that are optimally efficacious for different student subgroups. We return to this topic in the later section SMART Design in Project BASIC.

Tailoring Variable in Project BASIC

The MTSS-aligned adaptive interventions that are part of Project BASIC use silent-reading efficiency, as measured with the Test of Silent Reading Efficiency and Comprehension (TOSREC; R. Wagner et al., 2010), as the tailoring variable. The TOSREC measures students’ academic response as depicted in Figure 1. The TOSREC is a brief screening and progress-monitoring measure that can be administered to groups of students by teachers and other school personnel. Students are given 3 min to read and verify the accuracy of as many statements as possible. The raw score represents the number of correctly verified statements in the allotted 3 min. TOSREC scores correlate highly with scores on large-scale standardized tests of reading comprehension, particularly in the early grades (R. Wagner et al., 2010). The TOSREC is often used as a research instrument; however, because we use it as a tailoring variable, it represents part of the adaptive intervention (similar to the intervention options, decision points, and decision rules).

Figure 1. Hypothetical Adaptive Intervention 1.

Decision Points in Project BASIC

The decision points in Project BASIC are at Week 0 (beginning of the school year) and at Week 10. These decision points were selected for several reasons. First, in schools already using a schoolwide version of MTSS, screening and decisions about treatment options are made soon after the beginning of the school year and again before the winter break or just after returning to school in the new calendar year, suggesting a roughly 10-week first stage. Second, 10 weeks aligns with the What Works Clearinghouse practice guide on response to intervention (Gersten et al., 2009), which recommends providing intervention for at least 6 weeks before regrouping students according to responsiveness.

Decision Rules in Project BASIC

We assume that the first screener is administered early in the school year, just prior to the first decision point. Performance on the TOSREC, as described by the decision rule, identifies at-risk students who may benefit from supports beyond those provided by the core program alone and who, in the absence of such supports, are likely to continue struggling to read at grade level. At Week 0, a cut score at the 30th percentile on the TOSREC is used to identify at-risk students. Students scoring above the 30th percentile are considered not at risk. Our use of this cut score was based on Torgesen’s (2000) reasoning that benchmarks for achievement should fall above the average range (not within it) to minimize false negatives (i.e., failing to identify students who were truly at risk). The 30th percentile is a common risk cut point in reading intervention studies (Simmons et al., 2008; Vellutino et al., 2008). At Week 10, the second decision point, a cut score at approximately the 40th percentile on the TOSREC is used to identify students who have been sufficiently responsive to the intervention and are unlikely to continue to benefit from it. This cut score is higher than the one used at screening to ensure that students deemed “responsive” truly no longer need ongoing intervention (i.e., we sought to minimize the possibility of releasing students from intervention who still needed it).
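
Expressed as code, these two rules look like the following sketch; the function names and option labels are ours, and we assume TOSREC percentile scores have already been computed.

```python
# Illustrative sketch of the two Project BASIC decision rules described above;
# the percentile cut scores are from the text, but the function names and
# option labels are hypothetical.

def week0_rule(tosrec_percentile: float) -> str:
    """Screening decision at the first decision point (Week 0)."""
    if tosrec_percentile <= 30:   # at or below the 30th percentile = at risk
        return "at risk: provide supplemental supports beyond core instruction"
    return "not at risk: core instruction only"

def week10_rule(tosrec_percentile: float) -> str:
    """Response decision at the second decision point (Week 10)."""
    if tosrec_percentile < 40:    # below the 40th percentile = nonresponder
        return "nonresponder: intensify intervention"
    return "responder: step intervention down"
```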

The TOSREC is commonly used for research purposes. In the Project BASIC SMART design, however, we use it as a tailoring variable, making it part of the adaptive intervention rather than (or in addition to) an element of the test battery being used for research. Note, also, that Project BASIC is an ongoing research program. At present, its decision points and decision rules are experimental; the measure and the cut scores are based on the available evidence and on our own past research. Subsequent SMART designs may demonstrate that other measures provide better information for tailoring this intervention. Later studies may also show that different cut scores on the TOSREC better tailor or adapt the intervention for this population of students. It is important to note that we are not suggesting a change in the sequence, timing, or purpose of screening or monitoring measures as used in traditional, schoolwide MTSS models, nor are we putting forward a specific MTSS-related intervention at this time.

One Hypothetical Adaptive Intervention in Project BASIC

The adaptive intervention in Figure 1 is hypothetical because it is the subject of ongoing inquiry. It aligns with Project BASIC’s program of research and represents one possible intervention for the population of second-grade students at risk for reading difficulty or for a subset of those students. It includes the intervention options, tailoring variable, and decision points and decision rules outlined earlier. In the next section, we describe the ongoing SMART design conducted as part of Project BASIC to answer scientific questions about the best way to integrate the self-regulation intervention in an adaptive intervention for students in second grade who are struggling to read.

SMART Design in Project BASIC

The SMART design for Project BASIC is presented in Figure 2. The primary research questions motivating this design are (a) whether the self-regulation intervention should be provided at Week 0 to augment core instruction + reading intervention for struggling second-grade readers, (b) whether the self-regulation intervention should be added to core instruction at Week 10 for Stage 1 responders, and (c) whether the self-regulation intervention should be added to core instruction + intensified, individualized reading intervention at Week 10 for nonresponders to Stage 1 treatments.

Figure 2. SMART design for Project BASIC.

In Project BASIC’s two-stage SMART design, each intervention stage continues for 10 weeks, by design. Earlier, we described the rationale for the 10-week intervention stage when discussing the elements included in the adaptive intervention in Figure 1. An additional consideration relates to the challenge of recruiting schools into a randomized research design. In Texas, it is difficult for schools and districts to participate in a study that overlaps with the time in the school year when high-stakes tests are administered. The high-stakes assessment in Texas (State of Texas Assessments of Academic Readiness, or STAAR) is administered in March of the school year, beginning in third grade. The 20-week intervention (two 10-week treatment stages) ends before students begin preparing for the STAAR, which increases schools’ likelihood of participating. Because the research plan requires work in up to 30 schools over the 4-year project, schools’ willingness to participate is a priority.

The primary outcome in the Project BASIC SMART design is reading comprehension, as measured by the Gates-MacGinitie Reading Comprehension subtest (MacGinitie et al., 2001), although the full battery of outcome measures also includes tests of word reading, fluency, academic engagement, and externalizing behavior. Note that the Gates-MacGinitie and the other outcome measures are part of the research design, in contrast to the TOSREC, which represents a key element of the adaptive intervention, as described in an earlier section. At the beginning of the school year (Week 0 in Figure 2), at-risk students, where “risk” is defined by performance on the TOSREC, are randomized within schools, under intent-to-treat assumptions, to one of two treatment conditions: (a) core instruction + reading intervention + self-regulation intervention or (b) core instruction + reading intervention. At the second decision point (Week 10 in Figure 2), students are identified as responders or nonresponders based on their middle-of-the-school-year (i.e., Week 10) TOSREC performance. Responders are randomized again to either (a) core instruction or (b) core instruction + self-regulation intervention. Nonresponders are rerandomized to either (a) core instruction + intensified, individualized reading intervention or (b) core instruction + intensified, individualized reading intervention + self-regulation intervention. The combination of possible treatments across the two randomizations results in eight experimental cells, identified by the letters A through H in Figure 2. Randomization at Week 0 and rerandomization at Week 10 increase the odds that all measured, unmeasured, and unknown student characteristics (up to the point of each randomization) are distributed evenly across the randomized groups. This yields unbiased estimates of the average causal effect of the self-regulation intervention when provided during Stage 1 (i.e., offering self-regulation intervention vs. not offering it during the first stage). It also generates unbiased estimates of the average causal effect of the self-regulation intervention when provided during Stage 2, for both responders and nonresponders to Stage 1 interventions.

The first question motivating this SMART design (i.e., whether the self-regulation intervention should be added at Week 0 for at-risk students) can be answered by comparing reading comprehension scores (as well as scores on the other outcomes) for students in cells A + B + C + D with reading comprehension scores for students in cells E + F + G + H in Figure 2. The second question (i.e., whether the self-regulation intervention should be added at Week 10 for responders) can be addressed by comparing cells A + E with cells B + F. The third question (i.e., whether the self-regulation intervention should be added at Week 10 for nonresponders) involves comparing cells C + G with cells D + H. Note that in this SMART design, the second randomization is stratified on two variables: (a) whether the student was assigned to the self-regulation intervention in Stage 1 and (b) whether the student responded academically to the Stage 1 intervention by the end of Week 10.
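
Given student-level outcome data labeled by experimental cell, these three primary comparisons reduce to contrasts between pooled sets of cells. The sketch below uses simulated placeholder data; in practice, these effects would be estimated with models that also account for students’ clustering within schools.

```python
import numpy as np
import pandas as pd

# Simulated placeholder data: each student has a Figure 2 cell label (A-H)
# and a reading comprehension score.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cell": rng.choice(list("ABCDEFGH"), size=400),
    "gates_rc": rng.normal(100, 15, size=400),
})

def contrast(data: pd.DataFrame, group1: set, group2: set) -> float:
    """Difference in mean outcome between two pooled sets of experimental cells."""
    m1 = data.loc[data["cell"].isin(group1), "gates_rc"].mean()
    m2 = data.loc[data["cell"].isin(group2), "gates_rc"].mean()
    return m1 - m2

q1 = contrast(df, set("ABCD"), set("EFGH"))  # add self-regulation at Week 0?
q2 = contrast(df, set("AE"), set("BF"))      # add it at Week 10 for responders?
q3 = contrast(df, set("CG"), set("DH"))      # add it at Week 10 for nonresponders?
```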

Additional Research Questions Motivating Project BASIC

What sequence of intervention options is most effective for improving reading outcomes?

The SMART design in Figure 2 includes eight adaptive interventions that are embedded in the trial by design, each defined by a sequence of decision rules. For example, one embedded adaptive intervention, labeled 1 in Table 2, recommends providing at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment, then, at Week 10, increasing nonresponders to core instruction + intensive, individualized reading intervention and decreasing intervention for responders to core instruction. Students in cells C + A (Figure 2) are consistent with this adaptive intervention. The decision rule for Adaptive Intervention 1 is as follows: At Week 0 (first decision point), provide the at-risk student with core instruction + reading intervention + self-regulation treatment for 10 weeks; assess academic response after 10 weeks using the TOSREC; at Week 10 (second decision point), if TOSREC < 40th percentile, increase treatment to include core instruction + intensive, individualized reading intervention; otherwise, provide core instruction.

Table 2.

Embedded adaptive interventions and corresponding experimental cells in the Project BASIC SMART.

1. First provide all at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention and decrease intervention for responders to core instruction. (Cells C + A)
2. First provide at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention and decrease intervention for responders to core instruction + self-regulation treatment. (Cells C + B)
3. First provide at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention + self-regulation intervention and decrease intervention for responders to core instruction. (Cells D + A)
4. First provide at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention + self-regulation intervention and decrease intervention for responders to core instruction + self-regulation intervention. (Cells D + B)
5. First provide at-risk students at Week 0 with core instruction + reading intervention; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention and decrease intervention for responders to core instruction. (Cells G + E)
6. First provide at-risk students at Week 0 with core instruction + reading intervention; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention and decrease intervention for responders to core instruction + self-regulation intervention. (Cells G + F)
7. First provide at-risk students at Week 0 with core instruction + reading intervention; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention + self-regulation intervention and decrease intervention for responders to core instruction. (Cells H + E)
8. First provide at-risk students at Week 0 with core instruction + reading intervention; then, at Week 10, increase nonresponders to core instruction + intensive, individualized reading intervention + self-regulation intervention and decrease intervention for responders to core instruction + self-regulation intervention. (Cells H + F)

Note. SMART = sequential multiple-assignment randomized trial.

A second embedded adaptive intervention, labeled 4 in Figure 2 (and Table 2), provides at-risk students at Week 0 with core instruction + reading intervention + self-regulation treatment; then, at Week 10, it increases nonresponders to core instruction + intensive, individualized reading intervention + self-regulation intervention and decreases intervention for responders to core instruction + self-regulation intervention (by discontinuing the reading intervention). Participants in cells D + B (Figure 2) are consistent with this adaptive intervention. The associated decision rule is as follows: At Week 0 (first decision point), provide at-risk students with core instruction + reading intervention + self-regulation treatment for 10 weeks; assess academic response after 10 weeks based on the TOSREC; at Week 10 (second decision point), if TOSREC < 40th percentile, increase intervention to core instruction + intensive, individualized reading intervention + self-regulation intervention; otherwise, provide core instruction + self-regulation treatment.

As a third example, consider Adaptive Intervention 5, which provides at-risk students at Week 0 with core instruction + reading intervention; then, at Week 10, it increases intervention by providing core instruction + intensive, individualized reading intervention for nonresponders and decreases treatment to core instruction for responders. This describes participants in cells G + E, and the decision rule can be expressed as follows: At Week 0 (first decision point), provide the at-risk student with core instruction + reading intervention for 10 weeks; assess academic response after 10 weeks based on the TOSREC (the second decision point); if TOSREC < 40th percentile, increase intervention to core instruction + intensive, individualized reading intervention; otherwise, provide core instruction. Other embedded adaptive interventions are described in Table 2.

Outcomes for any two (or more) of the embedded adaptive interventions can be contrasted, although some contrasts may be of greater scientific interest than others. In Project BASIC, an important comparison is between Adaptive Intervention 4 and Adaptive Intervention 5, because it contrasts the reading performance of students who are offered the self-regulation intervention throughout (i.e., initially and, subsequently, whether or not they respond) with that of students who receive no self-regulation training at any point. This comparison captures the effect of providing (vs. not providing) the self-regulation intervention to all students across both stages of the adaptive intervention, a contrast of primary scientific interest in Project BASIC.
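
Because responders and nonresponders are each rerandomized with probability 1/2, every student consistent with a given embedded adaptive intervention has the same constant assignment probability, so the mean outcome under an embedded adaptive intervention can be sketched by simply pooling its two consistent cells. The code below illustrates the Adaptive Intervention 4 versus 5 contrast under that simplifying assumption, again with placeholder data rather than project results.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cell": rng.choice(list("ABCDEFGH"), size=400),  # Figure 2 cell labels
    "gates_rc": rng.normal(100, 15, size=400),       # placeholder outcomes
})

# Embedded adaptive interventions and their consistent cells (Table 2).
embedded_ais = {1: "CA", 2: "CB", 3: "DA", 4: "DB",
                5: "GE", 6: "GF", 7: "HE", 8: "HF"}

def ai_mean(data: pd.DataFrame, ai: int) -> float:
    """Mean outcome among students consistent with one embedded adaptive intervention."""
    return data.loc[data["cell"].isin(set(embedded_ais[ai])), "gates_rc"].mean()

# Self-regulation throughout (AI 4) vs. no self-regulation at any stage (AI 5).
effect_4_vs_5 = ai_mean(df, 4) - ai_mean(df, 5)
```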

Is it possible to tailor the adaptive intervention further based on baseline and time-varying information collected from the SMART design?

A great deal of data is being collected as part of the research activities that accompany Project BASIC, and a subset of this information may be useful in deciding, at baseline and at Week 10, which students may benefit from the self-regulation intervention. Data on student engagement are one candidate in this respect. As part of the BASIC SMART design, engagement data are collected at baseline using Direct Behavior Ratings (DBR; Chafouleas, 2011; Chafouleas et al., 2009), flexible tools that combine the strengths of behavior rating scales and systematic direct observation and that focus on three keystone behaviors, including academic engagement. These data can be used to investigate whether academic engagement at baseline is useful in tailoring the first-stage options by identifying a subgroup of at-risk second graders who would benefit from participation in the self-regulation treatment.

The DBR data are collected weekly for each student by interventionists, including at the second decision point (i.e., Week 10). Thus, the DBR data can also be used to select subsequent intervention options for responders and nonresponders. For example, the question might be whether DBR information is useful in identifying subgroups of Week 10 nonresponders who are more likely than not to benefit from the self-regulation intervention. Other candidate tailoring variables include externalizing behaviors at Weeks 0 and 10 and gender. Several data-analytic methods are available for investigating candidate tailoring variables (e.g., Q-learning; Clifton & Laber, 2020; Nahum-Shani et al., 2012b, 2017).
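
As a rough illustration of the moderation logic behind such methods, the sketch below fits a Stage-2 regression of the outcome on a hypothetical Week-10 DBR engagement score, the Stage-2 self-regulation indicator, and their interaction. Everything here (the simulated data, variable names, and model form) is our own simplification for exposition, not the Project BASIC analysis plan.

```python
import numpy as np

# Simulated nonresponders: Week-10 DBR engagement (standardized), Stage-2
# self-regulation assignment, and an end-of-year outcome with a built-in
# moderation effect (self-regulation helps students with low engagement).
rng = np.random.default_rng(2)
n = 300
dbr10 = rng.normal(0, 1, n)
a2 = rng.choice([0, 1], n)                       # 1 = self-regulation offered
y = 0.2 * dbr10 + 0.3 * a2 * (dbr10 < 0) + rng.normal(0, 1, n)

# Stage-2 Q-function: outcome ~ intercept + dbr10 + a2 + a2*dbr10.
# A nonzero interaction coefficient suggests DBR moderates the Stage-2 effect.
X = np.column_stack([np.ones(n), dbr10, a2, a2 * dbr10])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def offer_self_regulation(dbr_score: float) -> bool:
    """Estimated rule: offer self-regulation where its modeled effect is positive."""
    return beta[2] + beta[3] * dbr_score > 0
```

In a full Q-learning analysis, the fitted Stage-2 value would then be carried back as a pseudo-outcome for a Stage-1 regression; the one-stage fragment above shows only the moderation step.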

Papers in This Special Section

The two papers (Fleury et al., this issue; Kasari et al., this issue) that accompany this introduction are both motivated by programs of research that focus on building an MTSS-aligned adaptive intervention. Both involve children with ASD, but they pursue different scientific questions about developing an adaptive intervention. The research described in the Kasari et al. paper represents an important step in developing an adaptive intervention to accelerate the development of social and academic engagement for students who demonstrate slow response or nonresponse to the treatments; the motivating scientific questions concern the best sequence of several evidence-based social interventions. The research described in the Fleury et al. paper represents an important step in developing an adaptive shared-reading intervention to address the unique, and varied, learning needs of preschool children with ASD; key scientific questions concern the group size for providing interventions and the length of intervention stages.

Both papers recognize the importance of a replicable protocol that links student data to intervention options, and they build on the adaptive intervention model to do so. Both projects implement a SMART design, but their designs differ because they are meant to answer different scientific questions (see Nahum-Shani et al., 2012a, for a comprehensive discussion of SMART design configurations that match different scientific questions). Notice that both studies are described as pilot studies that focus on feasibility and acceptability. Given the critical role pilot studies play in intervention development, whether in a SMART design context or more generally, we encourage more consideration of such work in Tier 1 peer-reviewed journals (Almirall et al., 2012).

Concluding Remarks

We conclude by restating several of our primary themes. First, an adaptive intervention is a prespecified, replicable sequence of decision rules that guides whether, how, when, and which measures should be used to make intervention decisions.

MTSS can benefit from adaptive interventions’ emphasis on, and methods for, empirically developing effective decision rules. Intervention researchers have identified effective intervention options for students using experimental approaches. Measurement specialists have produced tools that could serve as useful tailoring variables, along with research-based thresholds that can maximize the utility of MTSS decision rules. However, designing programs of intervention-based research that effectively integrate measurement, treatment, and decision-related components in a rigorous, systematic, and empirical manner has been challenging, in part because researchers have focused on a limited range of available research designs. The RCT remains the gold standard for evaluating program effectiveness, but it is not efficient for building adaptive interventions. To the extent that special education researchers focus on developing MTSS-aligned adaptive interventions, this work may benefit from wider use of experimental designs, such as SMART designs, to answer scientific questions about how best to construct these interventions.

A common misperception is that SMART designs require prohibitively large sample sizes. As with any trial, sample size calculations in a SMART design are a function of the hypothesis tests related to the primary aim of the trial. SMART designs do not necessarily require large sample sizes or complicated sample size calculations, and there are a number of easy-to-use sample size calculators for SMART designs with continuous, binary, and repeated-measures outcomes (https://nseewald1.shinyapps.io/SMARTsize). Sample sizes for cluster-randomized SMART designs can also be calculated with similar online tools. We powered the reading phase of Project BASIC using a minimum detectable effect (MDE) strategy. Our budget allowed for a Time 1 sample of 600 students across 10 schools over 2 school years. Assuming typical attrition, this corresponded to detectable Stage 1 treatment effects of 0.10 for fixed treatment effects and 0.30 for effects with considerable variability (σ2 = .10), and Stage 2 fixed treatment effects of 0.25 for all conditions except the group of students expected to respond to the Stage 1 and Stage 2 treatments. The expected MDE for that group was 0.38, assuming that effect sizes do not vary across the 10 schools.
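
For intuition about the arithmetic behind an MDE, the sketch below computes the detectable standardized effect for a simple balanced two-arm comparison, ignoring clustering, covariates, and attrition. It therefore does not reproduce the Project BASIC numbers above, which reflect those additional design features; it only shows the basic calculation.

```python
from scipy.stats import norm

def mde_two_group(n_total: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """MDE in standard-deviation units for a balanced two-arm comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    se = (4 / n_total) ** 0.5   # approximate SE of a standardized mean difference
    return z * se

# For example, 600 students split evenly across two first-stage arms:
print(round(mde_two_group(600), 2))   # ~0.23 SD under these simplifying assumptions
```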

Second, using adaptive intervention principles to operationalize MTSS-aligned interventions moves the decision-making process to the foreground. Many of the elements that characterize adaptive interventions are shared by MTSS, though the labels differ. For example, the decision points, tailoring variables, and intervention options that compose adaptive interventions are in many ways similar to the screening occasions, screening measures, and increasingly intensive supports in MTSS, respectively. However, missing from discussions of MTSS-aligned interventions is an emphasis on prespecified decision rules that link the screening measures to those supports. Although some applications of MTSS may include decision rules, there is, to our knowledge, no corpus of rigorous research that tackles questions about the most effective protocols for connecting these moving parts in the context of a specific adaptive intervention: one devoted to a well-defined population of students, built around concrete outcomes, and comprising a known set of intervention components. The absence of such research makes MTSS, already a challenging treatment model, even more difficult to implement in schools, contributing to its low levels of use even in schools that report high implementation (Balu et al., 2015; Fuchs & Fuchs, 2017). Identifying evidence-based protocols as part of developing MTSS-inspired interventions gives educators clear guidance on how and when to move students from one tier to another, and adaptive interventions represent a vehicle for doing so.

Finally, SMART design is a tool for advancing the research aims that have been a focus for special education researchers for almost 20 years, but it is only one of several research designs that can be employed to build effective adaptive interventions. Others have used factorial designs in the context of the multiphase optimization strategy (Collins, 2018), microrandomized designs (Nahum-Shani et al., 2015, 2018), and single-case experimental designs (Dallery et al., 2013). This special section is not a “call to action” or even a recommended change in scholarly practice. It is simply an update on a potentially useful experimental approach for developing adaptive interventions in an MTSS context with greater rigor and sharper focus than has been possible in the past (Almirall, Nahum-Shani, et al., 2018).

Authors’ Note

Work on this manuscript by Roberts, Clemens, Doabler, and Vaughn was supported by Institute of Education Sciences Grant R324N180018 (PI: Clemens). Work on this manuscript by Roberts and Vaughn was supported by National Institutes of Health Grant P50 HD052117 (PI: Fletcher) from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Eunice Kennedy Shriver National Institute of Child Health and Human Development or the National Institutes of Health. Work on this manuscript by Almirall and Nahum-Shani was supported by Institute of Education Sciences Grant R324B180003 (MPI: Almirall and Nahum-Shani) and the National Institute of Drug Abuse Grant R01DA039901 (MPI: Almirall and Nahum-Shani).

References

  1. Almirall D, & Chronis-Tuscano A (2016). Adaptive interventions in child and adolescent mental health. Journal of Clinical Child & Adolescent Psychology, 45(4), 383–395. 10.1080/15374416.2016.1152555 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Almirall D, Compton SN, Gunlicks-Stoessel M, Duan N, & Murphy SA (2012). Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Statistics in Medicine, 31(17), 1887–1902. 10.1002/sim.4512 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Almirall D, Kasari C, McCaffrey DF, & Nahum-Shani I (2018). Developing optimized adaptive interventions in education. Journal of Research on Educational Effectiveness, 11(1), 27–34. 10.1080/19345747.2017.1407136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Almirall D, Nahum-Shani I, Wang L, & Kasari C (2018). Experimental designs for research on adaptive interventions: Singly and sequentially randomized trials. In Collins L, & Kugler K (Eds.), Optimization of behavioral, biobehavioral, and biomedical interventions (pp. 89–120). Springer. 10.1007/978-3-319-91776-4_4 [DOI] [Google Scholar]
  5. Balu R, Zhu P, Doolittle F, Schiller E, Jenkins J, & Gersten R (2015). Evaluation of response to intervention practices for elementary school reading (NCEE 2016–4000). National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/pubs/20164000/pdf/20164000_es.pdf [Google Scholar]
  6. Barnett D (2005). Keystone behaviors. In Lee SW (Ed.), Encyclopedia of school psychology (p. 279). Sage. [Google Scholar]
  7. Bradshaw CP, Mitchell MM, & Leaf PJ (2010). Examining the effects of schoolwide positive behavioral interventions and supports on student outcomes: Results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12(3), 133–148. 10.1177/1098300709334798 [DOI] [Google Scholar]
  8. Briesch AM, & Chafouleas SM (2009). Review and analysis of literature on self-management interventions to promote appropriate classroom behaviors (1988–2008). School Psychology Quarterly, 24(2), 106. 10.1037/a0016159 [DOI] [Google Scholar]
  9. Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, & Gibbons RD (2009). Adaptive designs for randomized trials in public health. Annual Review of Public Health, 30, 1–25. 10.1146/annurev.publhealth.031308.100223 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Chafouleas SM (2011). Direct behavior rating: A review of the issues and research in its development. Education and Treatment of Children, 34(4), 575–591. 10.1353/etc.2011.0034 [DOI] [Google Scholar]
  11. Chafouleas SM, Riley-Tillman TC, & Christ TJ (2009). Direct behavior rating (DBR) an emerging method for assessing social behavior within a tiered intervention system. 10.1177/1534508409340391 [DOI] [Google Scholar]
  12. Chakraborty B, & Murphy SA (2014). Dynamic treatment regimes. Annual Review of Statistics and its Application, 1, 447–464. 10.1146/annurev-statistics-022513-115553 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Chall JS (1996). American reading achievement: Should we worry? Research in the Teaching of English, 30(3), 303–310. [Google Scholar]
  14. Clemens NH (Principal Investigator). (2018–2023). Behavior and academic supports; integration and cohesion (Project No. R324N180018) [Grant]. Institute of Educational Sciences. [Google Scholar]
  15. Clemens NH, Keller-Margulis MA, Scholten T, & Yoon M (2016). Screening assessment within a multi-tiered system of support: Current practices, advances, and next steps. In Jimerson SR, Burns MK, & VanDerHeyden AM (Eds.), Handbook of response to intervention (pp. 187–213). Springer. 10.1007/978-1-4899-7568-3_12 [DOI] [Google Scholar]
  16. Clemens NH, Widales-Benitez O, Kestian J, Peltier C, D’Abreu A, Myint A, & Marbach J (2018). Progress monitoring in the elementary grades. In Pullen PC, & Kennedy MJ (Eds.), Handbook of response to intervention and multi-tiered systems of support (pp. 175–197). Routledge. [Google Scholar]
  17. Clifton J, & Laber E (2020). Q-learning: Theory and applications. Annual Review of Statistics and Its Application, 7, 279–301. 10.1146/annurev-statistics-031219-041220 [DOI] [Google Scholar]
  18. Collins LM (2018). Conceptual introduction to the multiphase optimization strategy (MOST). In Collins L, & Kugler K, (Eds.), Optimization of behavioral, biobehavioral, and biomedical interventions (pp. 1–34). Springer. 10.1007/978-3-319-72206-1_1 [DOI] [Google Scholar]
  19. Collins LM, Murphy SA, & Bierman KL (2004). A conceptual framework for adaptive preventive interventions. Prevention Science, 5(3), 185–196. https://doi.org/1389-4986/04/0900-0185/1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Collins LM, Murphy SA, & Strecher V (2007). The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): New methods for more potent eHealth interventions. American Journal of Preventive Medicine, 32(5), S112–S118. https://doi.org/0.1016/j.amepre.2007.01.022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Collins LM, Nahum-Shani I, & Almirall D (2014). Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clinical Trials, 11(4), 426–434. https://doi: 10.1177/1740774514536795. Epub 2014 Jun 5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Dallery J, Cassidy R, & Raiff BR (2013). Single-case experimental designs to evaluate novel technology-based health interventions. Journal of Medical Internet Research, 15(2), e22. 10.2196/jmir.2227 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Dawson R, & Lavori PW (2012). Efficient design and inference for multistage randomized trials of individualized treatment policies. Biostatistics (Oxford, England), 13(1), 142–152. 10.1093/biostatistics/kxr016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Deno SL (2016). Data-based decision-making. In Jimerson SR, Burns MK, & VanDerHeyden AM (Eds.), Handbook of response to intervention (pp. 9–28). Springer. [Google Scholar]
  25. DiGangi SA, Maag JW, & Rutherford RB Jr. (1991). Self-graphing of on-task behavior: Enhancing the reactive effects of self-monitoring on on-task behavior and academic performance. Learning Disability Quarterly, 14(3), 221–230. 10.2307/1510851 [DOI] [Google Scholar]
  26. Ducharme JM, & Shecter C (2011). Bridging the gap between clinical and classroom intervention: Keystone approaches for students with challenging behavior. School Psychology Review, 40(2), 257–274. 10.1080/02796015.2011.12087716 [DOI] [Google Scholar]
  27. DuPaul GJ, Ervin RA, Hook CL, & McGoey KE (1998). Peer tutoring for children with attention deficit hyperactivity disorder: Effects on classroom behavior and academic performance. Journal of Applied Behavior Analysis, 31(4), 579–592. https://doi: 10.1901/jaba.1998.31-579 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Espin CA, Wayman MM, Deno SL, McMaster KL, & Rooij M (2017). Data-based decision making: Developing a method for capturing teachers’ understanding of CBM graphs. Learning Disabilities Research & Practice, 32(1), 8–21. 10.1111/ldrp.12123 [DOI] [Google Scholar]
  29. Fallon LM, McCarthy SR, & Sanetti LMH (2014). School-wide positive behavior support (SWPBS) in the classroom: Assessing perceived challenges to consistent implementation in Connecticut schools. Education and Treatment of Children, 37(1), 1–24. https://www.jstor.org/stable/pdf/44820715 [Google Scholar]
  30. Fletcher JM, & Vaughn S (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3(1), 30–37. 10.1111/j.1750-8606.2008.00072.x
  31. Fuchs D, & Fuchs LS (2017). Critique of the national evaluation of response to intervention: A case for simpler frameworks. Exceptional Children, 83(3), 255–268. 10.1177/0014402917693580
  32. Gersten R, Beckmann S, Clarke B, Foegen A, Marsh L, Star JR, & Witzel B (2009). Assisting students struggling with mathematics: Response to intervention (RtI) for elementary and middle schools (NCEE 2009–4060). What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/Docs/PracticeGuide/rti_math_pg_042109.pdf
  33. Gough PB, & Tunmer WE (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. 10.1177/074193258600700104
  34. Greenwood CR (1996). The case for performance-based instructional models. School Psychology Quarterly, 11(4), 283–296. 10.1037/h0088935
  35. Greenwood CR, Terry B, Marquis J, & Walker D (1994). Confirming a performance-based instructional model. School Psychology Review, 23(4), 652–668. 10.1080/02796015.1994.12085740
  36. Guzman G, Goldberg TS, & Swanson HL (2018). A meta-analysis of self-monitoring on reading performance of K–12 students. School Psychology Quarterly, 33(1), 160. 10.1037/spq0000199
  37. Heppen JB, Kurki A, & Brown S (2020). Can texting parents improve attendance in elementary school? A test of an adaptive messaging strategy (NCEE 2020–006a). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/pubs/2020006
  38. Horner RH, Sugai G, Smolkowski K, Eber L, Nakasato J, Todd AW, & Esperanza J (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 11, 133–144. 10.1177/1098300709332067
  39. Jimerson S, Burns M, & VanDerHeyden A (2007). Response to intervention. Springer.
  40. Kauffman JM, Badar J, & Wiley AW (2019). RTI: Controversies and solutions. In Pullen PC, & Kennedy MJ (Eds.), Handbook of response to intervention and multi-tiered systems of support (pp. 11–25). Taylor & Francis.
  41. Lavori PW, & Dawson R (2014). Introduction to dynamic treatment strategies and sequential multiple assignment randomization. Clinical Trials, 11(4), 393–399. 10.1177/1740774514527651
  42. Lemons CJ, Sinclair AC, Gesel S, Gruner Gandhi A, & Danielson L (2017). Supporting implementation of data-based individualization: Lessons learned from NCII’s first five years. National Center on Intensive Intervention. https://files.eric.ed.gov/fulltext/ED575661.pdf
  43. Lunceford JK, Davidian M, & Tsiatis AA (2002). Estimation of survival distributions of treatment policies in two-stage randomization designs in clinical trials. Biometrics, 58(1), 48–57. 10.1111/j.0006-341x.2002.00048.x
  44. MacGinitie WH, MacGinitie RK, Maria K, Dreyer LG, & Hughes KE (2001). Gates-MacGinitie reading tests (4th ed.). Houghton Mifflin Harcourt.
  45. Majeika CE, Bruhn AL, Sterrett BI, & McDaniel S (2020). Reengineering tier 2 interventions for responsive decision making: An adaptive intervention process. Journal of Applied School Psychology, 36(2), 111–132. 10.1080/15377903.2020.1714855
  46. McIntosh K, & Goodman S (2016). Integrated multi-tiered systems of support: Blending RTI and PBIS. Guilford Press.
  47. McLaughlin TF, Laffey P, & Malaby JE (1977). Effects of instructions for on-task behavior and academic behavior: Two case studies. Contemporary Educational Psychology, 2(4), 393–395. 10.1016/0361-476X(77)90047-9
  48. Mooney P, Ryan JB, Uhing BM, Reid R, & Epstein MH (2005). A review of self-management interventions targeting academic outcomes for students with emotional and behavioral disorders. Journal of Behavioral Education, 14(3), 203–221. 10.1007/s10864-005-6298-1
  49. Murphy SA (2005). An experimental design for the development of adaptive treatment strategies. Statistics in Medicine, 24, 1455–1481. 10.1002/sim.2022
  50. Nahum-Shani I, & Almirall D (2019). An introduction to adaptive interventions and SMART designs in education (NCSER 2020–001). U.S. Department of Education, National Center for Special Education Research. https://ies.ed.gov/ncser/pubs
  51. Nahum-Shani I, Hekler EB, & Spruijt-Metz D (2015). Building health behavior models to guide the development of just-in-time adaptive interventions: A pragmatic framework. Health Psychology, 34(Suppl.), 1209. 10.1037/hea0000306
  52. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, & Murphy SA (2012a). Experimental design and primary data analysis methods for comparing adaptive interventions. Psychological Methods, 17(4), 457. 10.1037/a0029372
  53. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, & Murphy SA (2012b). Q-learning: A data analysis method for constructing adaptive interventions. Psychological Methods, 17(4), 478. 10.1037/a0029373
  54. Nahum-Shani I, Smith SN, Spring BJ, Collins LM, Witkiewitz K, Tewari A, & Murphy SA (2018). Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine, 52(6), 446–462. 10.1007/s12160-016-9830-8
  55. Nahum-Shani I, Ertefaie A, Lu X, Lynch KG, McKay JR, Oslin DW, & Almirall D (2017). A SMART data analysis method for constructing adaptive treatment strategies for substance use disorders. Addiction, 112(5), 901–909. 10.1111/add.13743
  56. Pelham WE Jr., Fabiano GA, Waxmonsky JG, Greiner AR, Gnagy EM, Pelham WE III, & Murphy SA (2016). Treatment sequencing for childhood ADHD: A multiple randomization study of adaptive medication and behavioral interventions. Journal of Clinical Child and Adolescent Psychology, 45(4), 396–415. 10.1080/15374416.2015.1105138
  57. Pelham WE Jr., Gnagy EM, Greenslade KE, & Milich R (1992). Teacher ratings of DSM-III-R symptoms for the disruptive behavior disorders. Journal of the American Academy of Child and Adolescent Psychiatry, 31(2), 210–218. 10.1097/00004583-199203000-00006
  58. Prater MA, Hogan S, & Miller SR (1992). Using self-monitoring to improve on-task behavior and academic skills of an adolescent with mild handicaps across special and regular education settings. Education and Treatment of Children, 15(1), 43–55. https://www.jstor.org/stable/42899243
  59. Reid R, Trout AL, & Schartz M (2005). Self-regulation interventions for children with attention deficit/hyperactivity disorder. Exceptional Children, 71(4), 361.
  60. Reinholz DL, & Andrews TC (2019). Breaking down silos working meeting: An approach to fostering cross-disciplinary STEM–DBER collaborations through working meetings. CBE Life Sciences Education, 18(3). 10.1187/cbe.19-03-0064
  61. Sheffield K, & Waller RJ (2010). A review of single-case studies utilizing self-monitoring interventions to reduce problem classroom behaviors. Beyond Behavior, 19(2), 7–13. https://www.jstor.org/stable/44987345
  62. Simmons DC, Coyne MD, Kwok OM, McDonagh S, Harn BA, & Kame’enui EJ (2008). Indexing response to intervention: A longitudinal study of reading risk from kindergarten through third grade. Journal of Learning Disabilities, 41(2), 158–173. 10.1177/0022219407313587
  63. Solomon BG, Klein SA, Hintze JM, Cressey JM, & Peller SL (2012). A meta-analysis of school-wide positive behavior support: An exploratory study using single-case synthesis. Psychology in the Schools, 49(2), 105–121. 10.1002/pits.20625
  64. Stecker PM, Fuchs LS, & Fuchs D (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795–820. 10.1002/pits.20113
  65. Stoiber KC, & Gettinger M (2016). Multi-tiered systems of support and evidence-based practices. In Jimerson SR, Burns MK, & VanDerHeyden AM (Eds.), Handbook of response to intervention (pp. 121–141). Springer. 10.1007/978-1-4899-7568-3_9
  66. Sugai G, & Horner R (2002). The evolution of discipline practices: School-wide positive behavior supports. Child & Family Behavior Therapy, 24(1–2), 23–50. 10.1300/J019v24n01_03
  67. Thall PF, & Wathen JK (2005). Covariate-adjusted adaptive randomization in a sarcoma trial with multi-stage treatments. Statistics in Medicine, 24(13), 1947–1964. 10.1002/sim.2077
  68. Torgesen JK (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15(1), 55–64. 10.1207/SLDRP1501_6
  69. Tsiatis AA (2019). Dynamic treatment regimes: Statistical methods for precision medicine. CRC Press.
  70. van den Bosch RM, Espin CA, Chung S, & Saab N (2017). Data-based decision-making: Teachers’ comprehension of curriculum-based measurement progress-monitoring graphs. Learning Disabilities Research & Practice, 32(1), 46–60. 10.1111/ldrp.12122
  71. Vaughn S, & Fuchs LS (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18(3), 137–146. 10.1111/1540-5826.00070
  72. Vaughn S, Roberts GJ, Miciak J, Taylor P, & Fletcher JM (2019). Efficacy of a word- and text-based intervention for students with significant reading difficulties. Journal of Learning Disabilities, 52(1), 31–44. 10.1177/0022219418775113
  73. Vellutino FR, Scanlon DM, Zhang H, & Schatschneider C (2008). Using response to kindergarten and first grade intervention to identify children at-risk for long-term reading difficulties. Reading and Writing, 21(4), 437–480. 10.1007/s11145-007-9098-2
  74. Wagner DL, Hammerschmidt-Snidarich SM, Espin CA, Seifert K, & McMaster KL (2017). Pre-service teachers’ interpretation of CBM progress monitoring data. Learning Disabilities Research & Practice, 32(1), 22–31. 10.1111/ldrp.12125
  75. Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (2010). TOSREC: Test of Sentence Reading Efficiency and Comprehension. Pro-Ed.
  76. Webber J, Scheuermann B, McCall C, & Coleman M (1993). Research on self-monitoring as a behavior management technique in special education classrooms: A descriptive review. Remedial and Special Education, 14(2), 38–56. 10.1177/074193259301400206
  77. Wood SJ, Murdock JY, Cronin ME, Dawson NM, & Kirby PC (1998). Effects of self-monitoring on on-task behaviors of at-risk middle school students. Journal of Behavioral Education, 8(2), 263–279. 10.1023/A:1022891725732
  78. Zarit SH, Lee JE, Barrineau MJ, Whitlatch CJ, & Femia EE (2013). Fidelity and acceptability of an adaptive intervention for caregivers: An exploratory study. Aging & Mental Health, 17(2), 197–206. 10.1080/13607863.2012.717252
  79. Zeuch N, Förster N, & Souvignier E (2017). Assessing teachers’ competencies to read and interpret graphs from learning progress assessment: Results from tests and interviews. Learning Disabilities Research & Practice, 32(1), 61–70. 10.1111/ldrp.12126
  80. Zhang Y, Laber EB, Davidian M, & Tsiatis AA (2018). Interpretable dynamic treatment regimes. Journal of the American Statistical Association, 113(524), 1541–1549. 10.1080/01621459.2017.1345743