Abstract
This article compares and contrasts the main features of dynamic testing and assessment (DT/A) and response to intervention (RTI). The comparison is carried out along the following lines: (a) historical and empirical roots of both concepts, (b) premises underlying DT/A and RTI, (c) terms used in these concepts, (d) use of these concepts, (e) evidence in support of DT/A and RTI, and (f) expectations associated with each of the concepts. The main outcome of this comparison is the conclusion that both approaches belong to one family of methodologies in psychology and education whose key feature is the blending of assessment and intervention into one holistic activity. Because DT/A has been around much longer than RTI, it makes sense for the proponents of RTI to consider both the accomplishments and frustrations that have accumulated in the field of DT/A.
Keywords: dynamic testing and assessment (DT/A), response to intervention (RTI), special education, children with special educational needs
In the history of human thought, there are many examples of overlap in ideas. Sometimes these overlaps are concurrent (e.g., several of Watson and Crick’s contemporaries were close to revealing the structure of DNA, but Watson and Crick did it first); other times they are sequential (e.g., Mendel’s laws were independently rediscovered long after his death). The purpose of this essay is to explore the degree of overlap between the concepts of dynamic testing and assessment (DT/A) and response to intervention (RTI).
This comparison is carried out not to enter the debate on the validity of operational and/or formal definitions of specific learning disabilities (SLDs; Fletcher, Lyon, Fuchs, & Barnes, 2007; Gerber, 2005; Kavale, 2005; OSEP, 2001) or to comment on the current discussion in education and policy of the Individuals with Disabilities Education Improvement Act (IDEIA; 2004) or the No Child Left Behind Act (NCLB; 2002). The exploration of this overlap centers on the theoretical contexts of both concepts.
As Kurt Lewin noted, “There is nothing more practical than a good theory” (1952, p. 169). This statement has multiple meanings: Lewin assumes that a theory is good (i.e., able to offer insights into and explanations of previously unexplained phenomena) if it has implications and applications for practice. These thoughts have been discussed by many distinguished theorists and practitioners, but the message has always been pretty much the same—theorists should attempt to address real-life problems, and practitioners should attempt to use good theories to solve practical problems (Lens, 1987; Sarason, 1978).
It is in light of this type of thinking that this article is written. A theoretical comparison of concepts, as exemplified below, can be fruitful for several reasons. First, such a comparison can help avoid what Marsh (1994) referred to as the “jingle-jangle fallacy,” in which the same constructs are labeled (the jingle phenomenon) or operationalized (the jangle phenomenon) differently. Comparing concepts that were generated in different contexts but seemingly address the same or similar constructs may lead to more clarity with regard to the use of these concepts and the characterization of the constructs of interest. Second, such a comparison might assist the field in learning more about the practical applications of these concepts. Third, such a comparison might sharpen the role of these concepts and the corresponding constructs among other related concepts and constructs.
The idea of the “relatedness” of DT/A and RTI is not new to the literature (Abbott, Reed, Abbott, & Berninger, 1997; Barrera, 2006; Ehren & Nelson, 2005; Gerber, 2002; Gersten & Dimino, 2006). There is a tacit appreciation of a certain degree of closeness between the two concepts. Yet to my knowledge, until now there has been no attempt to compare these concepts systematically. The comparison is carried out along the following lines. First, I briefly comment on the historical and empirical roots of both concepts. Second, I touch on the premises underlying DT/A and RTI. Third, I present terms used in these concepts. Fourth, I comment on processes associated with the use of these concepts. Fifth, I briefly summarize evidence in support of DT/A and RTI. Sixth, I describe expectations associated with each of the concepts.
I then review the noted points of comparison (see also Table 1) and remark on the boundary or lack thereof between DT/A and RTI. I conclude the essay by summarizing my thoughts on the overlap between DT/A and RTI and their corresponding underlying constructs.
Table 1. A Comparison of the Main Features of DT/A and RTI

| Dimensions | DT/A | RTI |
| --- | --- | --- |
| Context | Aim: To sort and place disadvantaged children for educational purposes. Main idea: Assessment should include instruction to generate more accurate and informative data. Accent: Assessment | Aim: To revise precision of admission to special education. Main idea: Appropriateness of instruction should be monitored with systematic assessment. Accent: Instruction |
| Premises | Aim: To realize learning potential. Main ideas: (see text). Accent: Underachievement as compared with the child’s learning potential regardless of previous educational experience. Debate: Learning potential is generalizable vs. domain specific | Aim: To meet age-/grade-appropriate standards. Main ideas: (see text). Accent: Underachievement as compared with typically developing children of age and grade in context of adequate teaching. Debate: Qualifications for entry into RTI |
| Main concepts | (see Terms and Theories) | (see Terms and Theories) |
| Process | (see Process) | (see Process) |
| Types of evidence | (see Supportive Evidence) | (see Supportive Evidence) |
| Expertise requirements | Types of implementations (see text) | Types of implementations (see text) |
Historical and Social Context
To find the shared and unique roots of psychological and educational concepts, it is often useful to go back in history in an attempt to understand why these concepts were introduced in the first place. This time travel into the not-too-distant past, in this case, provides some perspective for comparing the concepts of DT/A and RTI.
Dynamic Assessment
Dynamic assessment is an “umbrella term used to describe a heterogeneous range of approaches” (Elliott, 2003, p. 16) in psychology and education whose core is in blending instruction into assessment. Multiple reviews of DT/A (see Note 1) point to three historical sources of this approach, all emerging, curiously, in the late 1920s and early 1930s: one European (e.g., Rey, 1934), one Russian (e.g., Vygotsky, 1934/1962), and one American (e.g., Thorndike, 1924). Table 2 briefly exemplifies DT/A’s selected modern theories and concepts.
Table 2. Selected Modern DT/A Theories and Concepts

| Approach | Representative Researchers and Clinicians | Major Concepts | Psychological and Educational Targets |
| --- | --- | --- | --- |
| Mediated learning experience | R. Feuerstein, C. Lidz, D. Tzuriel | Cognitive modifiability | Cognitive abilities |
| Learning potential | M. Budoff | Educability, mental retardation | Cognitive abilities |
| Graduated prompts approach | A. Brown, J. Campione, E. Peña | Zone of proximal development | Cognitive abilities, academic skills |
| LearnTest | J. Guthke, M. Hessels | Learning gain | Cognitive abilities, academic skills |
| Testing the limits | J. Carlson, K. Wiedl | Maximal level of performance | Cognitive abilities |
| Interactive assessment | H. C. Haywood | Mental retardation | Cognitive abilities |
| Training/assessing a single psychological function | L. Swanson | Cognitive processing | Memory |
Without repeating what has been said many times before (e.g., Grigorenko & Sternberg, 1998; Haywood & Lidz, 2007; Lidz & Elliott, 2000), it is crucial to point out the common threads that bind these early international precursors of the DT/A paradigm. This common platform can be summarized in three assumptions shared by authors then working on different continents:
Given the diverse educational experiences of children brought up in dissimilar cultural circumstances, conventional (also referred to as unassisted or static) assessment might not adequately capture the level of cognitive development.
Psychologists and educators should be interested not in where children are now, given their previous educational experience, but where they can be tomorrow, assuming that they are given adequate educational intervention from now on.
There is little use in assessing for the sake of assessment; assessment should be carried out as a part of intervention (i.e., being assisted or dynamic in nature) and for the sake of selecting or modifying intervention.
These principles were formulated in general terms by a number of U.S., European, and Russian psychologists in the 1920s and 1930s; were further developed through multiple specific implementations in the 1970s and 1980s; and currently are being differentiated substantially by a number of professionals working in the field today (see Note 2). These approaches differ in their theoretical formulations, operationalization of the three general principles just noted, practical goals, specific ways of interaction with students (both for purposes of assessment and instruction), amount of accumulated data, and popularity. I elaborate on the similarities and differences among different DT/A approaches throughout the article.
It is also of interest that ideas of DT/A were often developed because of practical issues related to assessing the potential of and developing pedagogical strategies for children with special educational needs (e.g., mentally retarded, deaf, and blind) or inadequate educational experiences (e.g., new immigrants or the socially disadvantaged). Generally speaking, it was assumed that tests of abilities, when administered in a conventional, unassisted manner, captured what the child had already mastered (i.e., indicators of abilities were equated with indicators of achievement; Sternberg & Grigorenko, 2001). Yet it was argued that “true” abilities of such special children, conceptualized as learning potential, are better captured through tests administered in an assisted, scaffolded manner (for a review of the historical roots of DT/A, see Sternberg & Grigorenko, 2002). Thus, the driving motivation behind a variety of approaches to DT/A was to develop instruments capable of generating data indicative not of what a child can do now, given an inadequate educational history, but rather what the child can do in the future, assuming that pedagogical instructions are tailored to his or her needs.
As noted earlier, the concept of DT/A is only slightly younger than the concept of intelligence assessment itself (Kozulin, 2005), yet the amount of literature on DT/A is nowhere near that on conventional assessment. That disparity sustains the perception of DT/A as a “novel” assessment paradigm, an “adolescent” rather than a “mature adult” in the field of testing and assessment (Grigorenko & Sternberg, 1998).
RTI
Philosophically, the roots of the concept of responsiveness or RTI (see Note 3) are in attempts to find the best way to educate children by taking into account patterns of response and adjusting pedagogical strategies depending on these responses (e.g., Bateman, 1965; Haring & Bateman, 1977). Pragmatically, the term RTI was propagated to address the disproportionate number of ethnic minority students identified for special education. As a concept relevant to the identification of SLD, RTI was used in 1982 by a National Academy of Sciences committee (Heller, Holtzman, & Messick, 1982). The next step was the operationalization of the concept using curriculum-based assessments (as described in L. S. Fuchs & Fuchs, 2006).
The concept of RTI is closely related to Reading First, one of the central components of NCLB (2002), and to the reauthorization of the IDEIA (2004), which introduced RTI as a possible alternative to the intelligence-achievement discrepancy for identifying SLD. These links are easily understood because NCLB and IDEIA (and, correspondingly, the concept of RTI) have been worked on and promoted by overlapping teams of policy makers.
Defining the functional role of RTI, the IDEIA (2004) states that when determining whether a child has an SLD, a local education agency (LEA) “shall not be required to take into consideration whether a child has a severe discrepancy between achievement and intellectual ability” in oral expression, listening comprehension, written expression, basic reading skills, reading comprehension, mathematical calculation, or mathematical reasoning and “may use a student’s response to scientifically-based instruction as part of the evaluation process” (Sec. 614). Yet the document is cautious in terms of establishing limitations of various components of RTI by stating that “the screening of a student by a teacher or specialist to determine appropriate instructional strategies for curriculum implementation shall not be considered to be an evaluation for eligibility for special education and related services” (IDEIA, 2004, Sec. 614). It also does not dismiss the discrepancy criterion; in fact, the concept of RTI is introduced in the presence of, not in place of, other ways to identify special educational needs.
It is important to note that one of the driving forces behind RTI is prevention. The method previously recommended by legislation, the discrepancy criterion (i.e., the discrepancy between IQ and achievement, defined in many district-specific ways), arguably resulted in a number of false-positive cases in which children were identified with SLD and received special education when they could have been assisted adequately within the framework of regular education, and a number of false-negative cases in which children who really needed support were not properly identified. Thus, RTI is viewed as one possible, but not the only (Hollenbeck, 2007), way of restoring the balance between the overidentification of those children who are not, in fact, eligible for special educational services and the underidentification of those children who do indeed need such services. To support RTI-based approaches, the IDEIA specifies that LEAs can use up to 15% of their IDEIA-related federal funds (2004, Sec. 614) and up to 50% in additional funds received through Title 1 activities for early identification and prevention services. Thus, prevention and more effective teaching in the context of regular education are key concepts associated with RTI, such that the entry point into special education becomes both more specific and more sensitive to those who need such education.
It is also essential to note that an important line of research that has substantively contributed to the foundation of RTI involves children with reading difficulties, in particular those difficulties manifested at the word level (Fletcher et al., 2007). These children demonstrate deficits when performing reading-related tasks and respond positively to remediation strategies targeting reading whether or not they display a discrepancy between general level of ability and achievement (Fletcher et al., 1994; Morris et al., 1998; Stanovich & Siegel, 1994; Torgesen, Morgan, & Davis, 1992). In this context, it has been argued that RTI reflects a movement away from the “wait to fail” approach assumed by the IQ-achievement discrepancy model and away from the disadvantage that this model imposes on low–socioeconomic status (SES), low-achieving students, who are often minority and ESL students.
RTI proponents argue that its introduction will allow better monitoring of teaching practices in regular and special education classrooms and, as a result, an improvement of teaching overall. RTI is assumed to be useful to all struggling readers, beginning in a regular classroom and focusing on the best possible teaching approach, which, if needed, can be administered within the context of a special education classroom.
As implied earlier, the introduction of RTI has policy-, economic-, and service-based considerations. Correspondingly, the propagation of RTI is expected to be associated with a number of benefits (American Speech-Language-Hearing Association [ASHA] et al., 2006; Cortiella, 2006). Specifically, RTI, as a substantial (although not sole) component of the process of establishing eligibility for special educational services, is expected to
reduce the waiting time for additional instructional support,
increase the number of students who succeed in the framework of general education and decrease the number of students inappropriately referred for special education,
improve the quality and quantity of information about the educational progress and instructional needs of individual students,
minimize testing and maximize instruction, and
monitor and improve the quality of instruction both in general and specific educational settings.
As a relatively new concept, RTI is not yet linked to a substantive body of literature or research. Thus, the concept is rife with unanswered questions. Among the noted limitations (Cortiella, 2006; Danielson, Doolittle, & Bradley, 2005; Ehren & Nelson, 2005; Gerber, 2005; Mastropieri & Scruggs, 2005) of RTI today are
a lack of clarity in translating information obtained in the context of RTI into regulations for identifying children with special educational needs;
the primary focus of RTI on elementary grades;
the primary focus of RTI on reading, with some limited information available for math and very little information for other academic skills and domains;
the primary focus on SLDs and limited attention to other special needs;
a lack of consideration of level of ability (i.e., lack of provision for children with high levels of ability who, although achieving at an average level, underachieve relative to their potential);
a lack of differentiation of limited English proficiency and low SES as sources of underachievement;
the need to combine RTI-based information with other sources of information (e.g., on general ability and cognitive functioning and behavior);
a lack of working models incorporating RTI consistently with existing practices within the LEA or private educational settings; and
a lack of professionals and/or professional training enabling the implementation of RTI.
To summarize this section on the historical and social context of DT/A and RTI, the following points should be made. First, as concepts, both DT/A and RTI appeared in response to specific practical needs and under an assumption that existing assessment approaches were inadequate in labeling children as low ability and/or low achieving and that these labels were inaccurate and ineffective in selecting appropriate pedagogical methods and predicting future developmental trajectories of the assessed children. Of note is that both concepts responded to very similar practical needs of improving precision in addressing the educational profiles of children with special needs. Historically, the concept of DT/A is much older than that of RTI. DT/A originated in the context of classifying and educating large numbers of children with disadvantaged educational backgrounds (e.g., orphans in the case of Vygotsky and immigrants in the case of Feuerstein; see below for details) so that as many students as possible could be educated in regular classrooms. RTI is closely linked to attempts to increase the sensitivity and specificity of the gates to special education so that only those who really need special services receive them.
Second, it is important to appreciate that both concepts assume an interaction, concurrent or sequential, of assessment and instruction. Here it might be useful to differentiate the integration of instruction into assessment and integration of assessment into instruction (Allal & Ducrey, 2000). The historical and sociocultural contexts of DT/A and RTI suggest that whereas DT/A is an example of the integration of instruction into assessment, RTI is an example of the integration of assessment into instruction. Thus, the question is what is added to what (assessment to instruction or instruction to assessment) and in what proportion. In short, both DT/A and RTI are means of educational diagnosis (domain general and/or specific), with DT/A aimed more at how instruction informs assessment and RTI aimed more at how assessment informs instruction. Both attempt to close the gap between instruction and assessment.
Premises
To understand the relationships between the two constructs, it is important to review their respective presuppositions and evaluate their overlaps and specificity.
DT/A
The concept of DT/A is grounded in the assertion that conventional assessment is neither accurate in evaluating abilities and skills nor instrumental in addressing the educational profiles of children with special needs. Thus, the very initiation of DT/A is rooted in the concern that the information provided by traditional (conventional or static) intellectual assessment is inaccurate with regard to such children’s intellectual potential and of no real value to teachers (Elliott, Lidz, & Shaughnessy, 2004). This assertion, in turn, is linked to the hypothesis that (a) such children (see Note 4) must be provided with a special type of evaluation, DT/A; (b) this evaluation should result in more accurate data on the current level of performance and a profile of cognitive, motivational, and academic strengths and weaknesses; and (c) these data are translatable into remedial educational strategies. In short, the idea behind DT/A is to identify and assess changeable or remediable skills, describe and classify them, and select or devise instructional methods based on this classification (Embretson, 2000). DT/A is conceived as a package deal including both assessment and intervention, thus overcoming the disconnect between the assessor and the instructor and between assessment and teaching (Ryba, 1998).
It is important to note that although, as a systematic movement, DT/A arose in the late 1970s and early 1980s in the context of testing for learning potential, generalizable across all domains of cognitive functioning, its conceptualization has been reformulated by the ongoing debate about the domain specificity of learning potential. Whereas some DT/A supporters still advocate domain-general approaches (e.g., Feuerstein, Feuerstein, Falik, & Rand, 2002), others argue for domain specificity of DT/A, stressing its importance for specific achievement areas (e.g., reading, math, and science; Camilleri, 2005; Gerber, 2000; Hamers, Pennings, & Guthke, 1994; Miller, Gillam, & Peña, 2001).
RTI
Like DT/A, RTI capitalizes on underachievement—the emphasis on the prefix under indicates a focus on children who do not achieve at school at a level adequate for their age and therefore require a modification of instruction to close the gap between the actual and expected levels of achievement (as determined by educational age-/grade-appropriate standards). The fundamental drive behind RTI is “an effort to personalize assessment and intervention” (D. Fuchs & Fuchs, 2006, p. 95). Thus, the purpose of RTI is twofold—to provide needy students with instruction matched to their needs and to provide further evaluation of these needs to devise additional support and instruction.
The basis of RTI is the assumption that students are likely to have difficulties acquiring academic skills if their response to commonly effective, evidence-based teaching is substantially weaker or absent when compared with that of their peers (Berninger & Abbott, 1994; L. S. Fuchs & Fuchs, 1998; Gresham, 2002; Vaughn & Fuchs, 2003). This conceptualization of RTI is associated with a number of implications. First, RTI is inherently about schooling and academic skills. In fact, RTI makes no assumptions about the level of a child’s abilities; it is all about academic performance: A child of a particular age is expected to learn in school given proper teaching. Second, RTI differentiates between low achievement due to improper teaching and low achievement due to the child’s individual profile. Because everyone else in class is learning at the anticipated level and rate (assuming evidence to support that), the teaching is assumed to be effective. Correspondingly, those children who are not learning are presumed to have a disability that constrains their progress within conventional instruction. Third, RTI assumes the presence of a clear understanding of the average progress on a given academic skill at a given age and grade level. Thus, all RTI-based approaches imply comparisons between a given child and “the norm.”
To summarize, both DT/A and RTI are concerned with the notion of underachievement. However, the dynamics of this consideration are different. The first difference is whether this underachievement is contextualized within previous teaching. DT/A assumes that underachievement is a part of the profile of a child with special educational needs; its major objective is to differentiate underachievers based on their learning potential. The assessment of learning potential is the nucleus of DT/A; placement and remediational decisions should be made based on the results of this assessment. Thus, DT/A is not directly concerned with the quality of teaching the child received prior to DT/A. For RTI, the registration of underachievement is relevant only if there is evidence that the child in question has received adequate instruction and that appropriate teaching has not increased the child’s levels of performance. Thus, RTI requires gathering information on both the quantity and quality of teaching already received by the child. The second difference is in generality versus specificity of underachievement. Historically, DT/A has aimed at revealing “general learning potential,” although lately there have been numerous applications exemplifying the use of DT/A within specific academic domains (for a review, see Lidz & Elliott, 2000). The concept of RTI is exclusively about academic achievement in specific domains. In fact, this very property of RTI has been criticized (Hale, Naglieri, Kaufman, & Kavale, 2004) for its apparent inability to generalize specific difficulties in mastering reading or math to cognitive profiles characteristic of all children with SLD.
Terms and Theories
What can be learned about the overlap between the concepts of DT/A and RTI from a brief review of terminology used in the context of these concepts?
DT/A
Having been around for a while, the concept of DT/A has generated quite a number of associated terms and theoretical contexts. Yet the notion of “construct fuzziness” has been used to stress the apparent lack of clarity of certain approaches within the DT/A paradigm (Jitendra & Kameenui, 1993). Here I briefly present the terms (see Note 5) most used in the field of DT/A, whether within a single approach or across a number of approaches (for a fuller discussion, see also Jeltova et al., 2007; Lidz & Elliott, 2000; Sternberg & Grigorenko, 2002).
Zone of proximal development (ZPD) is a concept introduced by Vygotsky (1934/1962) to capture the distance between what a child can do by himself or herself versus what he or she can do with a bit of assistance from adults (Chak, 2001; Kanevsky & Geake, 2004; Wennergren & Ronnerman, 2006) or peers (Fernandez, Wegerif, Mercer, & Rojas-Drummond, 2001–2002; Tzuriel & Shamir, 2007). The ZPD can be understood as the psychological space in which skill mastery and knowledge acquisition take place in real time. Thus, the concept of ZPD is social and interactive by definition—it signifies the assumption that cognitive development occurs within social interactions; in other words, it presumes that the development and mastery of cognitive skills are best accomplished within the context of assistance, scaffolding, and explicit/direct teaching (see below for descriptions of these concepts). The ZPD is one facet of the general idea developed by Vygotsky that child development unfolds as a sequence of external (exteriorized) and internal (interiorized) cognitive events: Knowledge, skills, and competencies initially are acquired externally, through interactions with more knowledgeable and experienced others, and then internalized (interiorized) by the child. Of note is that these “others” might be represented by real humans or by ideal humans reflected in products of human knowledge (e.g., books, video or audio recordings, and other objects containing concentrated knowledge).
The concept of the experienced other is captured by the concept of a mediator in the approach of Reuven Feuerstein. Thus, Feuerstein’s ideas are intimately related to Vygotsky’s. The concept of mediator is closely linked to the concept of the mediated learning experience (MLE; Feuerstein et al., 2002)—a special quality of interaction between a testee/learner and a tester/mediator. The general role of the mediator is to observe how a student approaches a problem and to explicate and remediate the difficulties experienced by the student. Mediation arises through joint engagement with the cognitive task at hand: The role of the mediator is to gauge the level of the student’s functioning and to reformulate the task in such a form that the student can master it. Thus, an MLE is characterized by intentionality and reciprocity of the interaction (i.e., the student is open to receiving help and the mediator is willing to provide it); mediation of meaning (i.e., the mediator is directly engaged in managing the student’s cognitive process and his or her emotions and motivation about that process); and transcendence (i.e., the mediator needs to bridge existing experience and transfer the functions learned to new situations; for further discussion and examples, see Kozulin, 2005; Kozulin & Garb, 2004; Tzuriel & Shamir, 2007). Mediation is used both for clinical purposes in the context of MLE and as a primary assessment technique in the instrument developed by Feuerstein and his colleagues, the Learning Potential Assessment Device (LPAD).
The “graduated prompt” approach refers to an assessment paradigm in which the extent of teaching needed for mastery and transfer is tracked (Campione, 1989; Campione, Brown, Ferrara, Jones, & Steinberg, 1985; Kanevsky, 1995). There are four terms typically associated with this approach. Probing refers to a technique of asking a sequence of clarifying questions; by means of answering these questions, the student is expected to arrive at the solution in an independent or semi-independent manner. Prompting, just like probing, intends to maximize the student’s independence in solution finding, but rather than being asked questions, the child is presented with a sequence of hints that lead toward the correct answer or the right solution to the problem. Collectively, probing and prompting are referred to as assisted learning/teaching; the success of assisted learning is verified with the transfer of acquired skills to tasks similar to but not the same as those on which the skills were mastered.
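To make the graduated-prompt logic concrete, here is a minimal sketch in Python (not taken from any published protocol; the `attempt` callback, the hint sequence, and the scoring convention are illustrative assumptions). The number of prompts a child needs before solving an item, especially on transfer tasks, is the datum this paradigm tracks:

```python
from typing import Callable, Sequence

def prompts_needed(attempt: Callable[[], bool], hints: Sequence[str]) -> int:
    """Run one graduated-prompt trial and return the number of hints used.

    `attempt` models the child trying the item; `hints` is ordered from
    least to most explicit. A return value of 0 means unaided success;
    len(hints) + 1 means the item was not solved even with full support.
    """
    if attempt():  # unaided attempt comes first
        return 0
    for k, hint in enumerate(hints, start=1):
        print(f"Prompt {k}: {hint}")  # deliver the next, more explicit hint
        if attempt():                 # the child tries again after each hint
            return k
    return len(hints) + 1
```

Fewer prompts needed on near- and far-transfer items, relative to the training items, would be read as evidence of learning in this paradigm.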
Testing the limits (Carlson & Wiedl, 1992) refers to an assessment technique used for determining the maximum level of a child’s performance at a given time. This technique is used for extrapolating additional clinical information and for generating placement and instructional decisions.
Learning potential refers to what a child can do given proper support (Hamers & Resing, 1993). Learning tests (Guthke, 1982, 1992) are devised to assess learning potential; these tests have a structure of pretest, teaching, and posttest. Learning tests can be subdivided into long-term (i.e., those tests that require training students on relevant tasks between a pretest and posttest; Campione, 1989; Resing, 1997) and short-term (i.e., those tests that offer assistance while testing is ongoing; de Leeuw, 1983; de Leeuw, van Daalen, & Beishuizen, 1987; Meijer & Riemersma, 2002) learning tests.
The majority of DT/A approaches adopt the pretest/posttest model of testing and assessment. The pretest presents testing materials prior to instruction; at posttest, the testing materials are administered after it. The interval between pretest and posttest can be filled with a number of types of instruction. Some approaches use direct instruction, in which an adult/teacher leads the acquisition of a skill; no or virtually no input is anticipated from the students. Other approaches use cognitive (e.g., thinking out loud or writing on a blackboard or some other medium) and physical (e.g., doing or using manipulatives) modeling, involving teaching techniques that are based on the demonstration of finding a solution for a task so that students can learn via observing and trying. Scaffolding (Wood, Bruner, & Ross, 1976) refers to an instructor (or assessor)–initiated interaction that engages the student (or the assessed) in an activity beyond the student’s current skill or understanding. Scaffolding assumes both calibrated support and instruction and ongoing diagnosis of the child’s progress toward the acquisition of the targeted skill or understanding. Thus, scaffolding is an adult/teacher–led process toward concept/skill acquisition. It is assumed that the support is at its maximum at the beginning of the interaction and will be withdrawn by its end (Stone, 1998). Discovery assumes an unaided type of learning in which children are expected to find out a particular rule or make a particular observation on their own. Given the variety of DT/A applications using the pretest/intervention/posttest model, a typology of applications was proposed: The so-called sandwich format assumes only one layer of instruction surrounded by testing, whereas the so-called cake format assumes multiple layers of instruction intermixed with testing (Sternberg & Grigorenko, 2001).
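The two formats can be contrasted schematically; the sketch below is purely illustrative, with phase labels taken from the description above rather than from any specific instrument:

```python
# Sandwich format: a single instruction layer between a static pretest and posttest.
sandwich_session = ["pretest", "instruction", "posttest"]

def cake_session(n_layers: int) -> list:
    """Cake format: instruction interleaved with testing in repeated mini-cycles."""
    phases = []
    for _ in range(n_layers):
        phases += ["mini-test", "mini-instruction"]  # help follows each difficulty
    return phases + ["final test"]

print(cake_session(2))
# ['mini-test', 'mini-instruction', 'mini-test', 'mini-instruction', 'final test']
```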
Following an early conceptualization of two categories of children, so-called gainers (those who benefit from assistance/hints and change their performance) and nongainers (those who do not benefit from assistance/hints) introduced by Budoff (1987a, 1987b), a number of DT/A paradigms use these (e.g., Büchel, Schlatter, & Scharnhorst, 1997; Lauchlan & Elliott, 2001) or similar (e.g., high scorers, learners, and nonlearners; Carlson & Wiedl, 2000) categories of learners.
RTI
Given that RTI is comparatively young, it is not surprising that fewer terms are associated with this concept. Yet although it has not been stated explicitly in the literature thus far, there is a similar, if not greater, level of construct fuzziness associated with RTI.
The nucleus of RTI is degree of responsiveness, which can be defined in an absolute (i.e., whether the child reached the level adequate to his or her age and grade or is at risk for becoming or identifiable as a child with SLD) or a relative form (i.e., the child’s rate of growth of a skill or whether the progress the child is making in response to instruction is adequate). Nonresponders (or inadequate responders) are students at risk for or with evidence of not benefiting from an intervention. The literature today does not have sufficient evidence prescribing the best and the most effective method of quantification and qualification of degree of responsiveness; in fact, it presents multiple possibilities (e.g., D. Fuchs & Deshler, 2007) and only rare examples of investigations of how convergent or divergent they are (e.g., Barth et al., 2008).
It is suggested that probable nonresponders (i.e., children at risk) be preselected prior to the beginning of the delivery of an evidence-based pedagogical intervention. This group can be identified based on the results of standardized or high-stakes testing (e.g., the proposed cutoff is 25%; D. Fuchs & Fuchs, 2006), norm-referenced measures, or a set of indicators capturing benchmark performance using data from the previous academic year. Alternatively, this group can be identified with the help of a screening instrument whose utility in predicting the benchmarks for a given academic grade or the outcomes of standardized testing is known. Universal screening has been shown to reduce the number of false positives categorized as nonresponders (Compton, Fuchs, Fuchs, & Bryant, 2006).
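As a minimal sketch of this preselection step (the student names and percentile scores are hypothetical; the 25% cutoff is the one cited above):

```python
def flag_at_risk(percentile_scores: dict, cutoff: float = 25.0) -> list:
    """Return the students who score below the cutoff percentile on the screen."""
    return [student for student, pct in percentile_scores.items() if pct < cutoff]

# Illustrative data: four students and their screening percentiles.
screen = {"Ana": 12.0, "Ben": 48.0, "Cam": 22.5, "Dee": 75.0}
print(flag_at_risk(screen))  # ['Ana', 'Cam'] become probable nonresponders to watch
```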
Currently, the literature contains examples of multiple ways of quantifying RTI (e.g., D. Fuchs & Deshler, 2007). Broadly categorized, they can be grouped into methods using the postintervention outcome (or “final”; see Barth et al., 2008) status, the rate of growth (or change while responding to intervention), or a mixture of both. The outcome-based methods are based on specific thresholds such as reaching a particular score on a standardized achievement test (Torgesen et al., 2001) or a criterion-referenced benchmark (Good, Simmons, & Kameenui, 2001); children scoring below such a threshold would be referred to as nonresponders. The growth-based methods take into account the comparative statistic for a given child’s learning rate in relation to the rate of a normative reference group (Marston, 1989). For example, if data can be collected from the whole class (or grade) of students, nonresponders can be identified as those whose level of performance and rate of growth (slope) were below the median slope parameter (Vellutino et al., 1996) or other normative cutoff points (D. Fuchs, Fuchs, & Compton, 2006). There are also examples of using both outcome and growth indicators simultaneously, so nonresponders are identified by demonstrating both achievement levels below a benchmark and learning rates below that of a reference group (e.g., L. S. Fuchs & Fuchs, 1998; Speece & Case, 2001). When multiple methods of identifying nonresponders were compared, the results showed that in general the agreement between various methods is poor, thus stressing the necessity of using more than one approach to documenting an inadequate response (Barth et al., 2008).
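The dual criterion (final status plus growth) can be sketched as follows; the benchmark value and the monitoring data are illustrative assumptions, and the slope is the ordinary least-squares slope of scores regressed on measurement occasion:

```python
from statistics import median

def ols_slope(scores: list) -> float:
    """Least-squares slope of scores against measurement occasion 0, 1, 2, ..."""
    n = len(scores)
    x_bar = (n - 1) / 2.0
    y_bar = sum(scores) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(scores))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

def dual_discrepancy(child: list, class_slopes: list, benchmark: float) -> bool:
    """Nonresponder under the mixed rule: final status below benchmark AND
    growth below the median slope of the reference group."""
    return child[-1] < benchmark and ols_slope(child) < median(class_slopes)

# Illustrative weekly scores for one child and slopes for the reference class.
child_scores = [10, 11, 11, 12, 12]
peer_slopes = [1.5, 1.2, 0.9, 1.8, 1.1]
print(dual_discrepancy(child_scores, peer_slopes, benchmark=20))  # True
```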
Progress monitoring is another essential component of RTI. It is assumed that the performance of nonresponders is constantly monitored with a set of short instruments relevant to the academic skills in which the student at risk might fail to excel. Some have proposed defining responsiveness as performance at or above the 16th percentile (D. Fuchs & Fuchs, 2006). This definition of responsiveness assumes (a) the presence of national or local norms, or (b) specifications of what the 16th percentile corresponds to in the language of benchmarks or anticipated progress, or (c) data on the whole classroom of children. If neither (a) nor (b) is available and (c) cannot be collected, then responsiveness can be operationalized within the child as a measure of the child’s improvement between consecutive time points. For example, it has been suggested to use an indicator of change (i.e., a positive step in building a specific academic skill) whose magnitude is greater than the standard error of estimate (D. Fuchs & Fuchs, 2006). It is important to note that progress monitoring is closely related to formative assessment. In other words, it has the function not only of registering what happens to nonresponders within the context of a particular pedagogical intervention but also of helping the teacher to figure out whether a particular pedagogical approach is working or whether changes are needed. For example, if the anticipated change in the targeted academic skill is not registered for the majority (e.g., 80%) of the students, then the rate of nonresponders is much greater than expected, an indication that teaching strategies and materials should be adjusted.
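When the within-child operationalization is the only option, one hedged reading of the standard-error-of-estimate suggestion is to fit a trend line to the child’s own monitoring scores and ask whether the latest gain exceeds the noise around that line. The regression-based computation below is an assumption about how the suggestion would be operationalized, not a prescription from the cited source:

```python
import math

def trend_see(scores: list) -> float:
    """Standard error of estimate of an OLS trend line fit to the scores."""
    n = len(scores)
    x_bar = (n - 1) / 2.0
    y_bar = sum(scores) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in enumerate(scores))
         / sum((x - x_bar) ** 2 for x in range(n)))
    a = y_bar - b * x_bar
    sse = sum((y - (a + b * x)) ** 2 for x, y in enumerate(scores))
    return math.sqrt(sse / (n - 2))  # n - 2 degrees of freedom for a fitted line

def responsive_step(scores: list) -> bool:
    """Is the latest change larger than the noise in the child's own trend?"""
    return (scores[-1] - scores[-2]) > trend_see(scores)
```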
Examples of progress monitoring are the curriculum-based approaches of curriculum-based assessment (CBA) and curriculum-based measurement (CBM)—teacher-administered ways of tracking and recording children’s progress in specific learning areas (for detail, see http://www.studentprogress.org). There are examples of CBA in literacy and numeracy for Grades 1 through 6 (e.g., Dynamic Indicators of Basic Early Literacy Skills, http://dibels.uoregon.edu; AIMSweb, http://www.aimsweb.com; Monitoring Basic Skills Progress, http://www.proedinc.com; and Yearly Progress Pro, http://www.mhdigitallearning.com). It is of note that there is no single best assessment that is universally accepted by the field; therefore, many researchers use multiple assessments (Shapiro, Solari, & Petscher, 2008). In addition, there is no clear definition of the expected normative growth and its worrisome deviations (Silberglitt & Hintze, 2007). Moreover, the amount and quality of research on CBA and CBM vary dramatically (Wallace, Espin, McMaster, Deno, & Foegen, 2007) across grades (with a primary accent on elementary grades) and achievement domains (with a primary accent on reading). It is of interest that certain implementations of CBA have been criticized by the very developers of DT/A as providing quantitative but not qualitative information on students’ performance (Lidz, 1997).
Yet another important concept in RTI is the tier. Like DT/A, RTI is multilayered (or multitiered). There are multiple versions of RTI, varying by the number of tiers, from two to four (D. Fuchs, Mock, Morgan, & Young, 2003). One can draw a direct analogy between multiple layers of RTI and the cake model of DT/A (discussed earlier). The idea of layers or tiers in RTI is that each tier differs from the one before along a number of dimensions. First, there is a substantial correlation between the tier and the student’s need: the needier the student, the higher the tier at which services are delivered. Second, each subsequent tier assumes a more individualized, student-oriented approach. Third, similarly, the higher the tier, the more teacher directed, systematic, and explicit the pedagogy is. Fourth, each subsequent tier is characterized by more time and higher intensity. In other words, going back to the cake analogy, each subsequent tier is associated with layers of the cake that are thicker and richer.
It is assumed that the outcome of RTI is the delivery of proper intervention for a needy child. It is also assumed that there are some criteria that allow the child to complete the intervention at whatever tier it is delivered and return to the regular classroom. These criteria are typically referred to as dismissal criteria. However, these criteria have not been systematically defined; moreover, it has been shown that they are fluid (Vaughn, Linan-Thompson, & Hickman, 2003). In fact, it is possible that children who once met the criteria for dismissal from a particular tier of tutoring can again perform inadequately in a regular classroom and require additional RTI and tutoring. Similarly, in cases of late-emerging disabilities (e.g., reading difficulties pertaining to reading comprehension), early identification might be difficult (Compton, Fuchs, Fuchs, Elleman, & Gilbert, 2008). Thus, it is assumed that progress monitoring continues even after the child returns to a regular classroom, as well as for children in a regular classroom whose difficulties might emerge late in their school careers.
To summarize, there is a nontrivial amount of conceptual overlap between the terms used in DT/A and RTI. The overlap is mostly intuitive because many of the terms capturing both facets of DT/A and RTI are still imprecise. Yet there are at least two clearly overlapping sets of terms: One set relates to the idea of how assessment and instruction should be blended together (repeated test–teach–test cycles for both DT/A and RTI, and repeated teach–test–teach cycles for RTI) and the other to the notion of having two large groups of children, those who benefit from specific instruction (gainers for DT/A and responders for RTI) and those who do not (nongainers for DT/A and nonresponders for RTI). However, different terms and operationalizations are used to capture these ideas (the jingle-jangle phenomenon described earlier).
Process
Fundamentally, both DT/A and RTI are about growth and change. Thus, the assumption is that there are at least two metrics of change—one pertaining to the change in skill (i.e., a particular academic skill or cognitive function is expected to change) and the other to the change in rate of learning (i.e., the change in rate of acquisition of the targeted skill or function). In other words, because both approaches are based on an assumption that teaching is about making a difference, both approaches are designed to anticipate change and are equipped to capture the rate at which such change happens (i.e., to capture the rate of learning).
DT/A
In reviewing approaches to capturing change in DT/A, it is instrumental to return to the sandwich/cake metaphor.
In the sandwich format of DT/A, the pretest is typically operationalized as a conventional static test. The instruction has been conceptualized and delivered in many different forms; the common ground of these various types of instruction and intervention is to meet the individual needs of the child and to ensure change in performance by meeting some criteria, such as introducing a mental tool or achieving cognitive modification (see the Terms and Theories section and Sternberg & Grigorenko, 2002, for review). These types of instruction are typically clinical in their orientation and are standardized loosely, if at all. Given the many methodological difficulties in using difference (gain) scores (Sternberg & Grigorenko, 2002), using the posttest performance indicators as indicators of learning potential (Guthke, 1982) is recommended.
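The difficulty with difference scores that motivates this recommendation can be stated compactly. Under classical test theory (a standard textbook result, not derived in the source), with equal pretest and posttest variances, the reliability of the gain score D = Y - X is

$$\rho_{DD'} = \frac{\rho_{XX'} + \rho_{YY'} - 2\,\rho_{XY}}{2\,(1 - \rho_{XY})}$$

so two tests each with reliability .80 that correlate .70 yield a gain score with reliability (.80 + .80 - 2 × .70) / (2 × (1 - .70)) ≈ .33: the more strongly pretest and posttest correlate, the less reliable the gain.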
The cake format assumes a much more interactive approach, where assistance is offered in smaller doses and more often, usually as soon as difficulties in solving a particular item or a set of items are identified. In other words, one DT/A session delivered in this paradigm has many mini-tests and mini-interventions. The modes of intervention within this type of DT/A are typically much more standardized (e.g., administered through structured hints; Guthke & Beckmann, 2000, 2003). This format is typically used in computerized DT/A, where computer algorithms are used to determine sequences and content of instructional support; items are presented in a homogeneous, standardized manner, and psychometric approaches with particular characteristics are used to quantify change (Beckmann, 2006; Embretson, 2000; Embretson & Prenovost, 2000; Guthke, Beckmann, & Dobat, 1997; Kalyuga & Sweller, 2005).
At this point, there is no preferred or recommended approach to quantifying change within DT/A. In addition, DT/A has not yet been applied on a large scale. In general, it exists in only two, partially overlapping environments. The first pertains to clinical practices of psychologists and educators engaged with questions of diagnosis and placement. Some such approaches to DT/A have been well developed and established as highly attractive domains of clinical practice (e.g., the mediated learning approach developed by Feuerstein and his colleagues or the learning potential approach developed by Budoff). But like many traditions in psychotherapy, those approaches, although potentially powerful clinically, have not been proven effective empirically in the research literature. Thus, there are no systemic examples of implementation of DT/A outside those specific clinical practices, and the acceptability of the results of such assessment is determined by the degree to which schools are open to receiving comments on how particular students perform in situations of assisted assessment. The second type of environment is the research field. As a field of study, DT/A has generated many interesting results and ideas but still has not resolved the daunting questions of measuring and tracking change and relating the results of DT/A to educational changes.
These two DT/A environments are characterized by different processes. As with psychotherapies, clinicians can be trained in specific approaches to DT/A or particular modes of assessment (e.g., administering the LPAD probably requires as much training as administering and interpreting the Thematic Apperception Test) and can disseminate these approaches through their local practices. Clinical intuition, skills, and expertise mean a lot in this kind of dissemination. The relevance of DT/A’s outcomes for the lives of learners is filtered through the traditions of the LEAs. The typical unit of practice in this approach is an individual child. The second, research-based environment of DT/A is focused primarily on comparing results of conventional testing with results of DT/A in groups of children with a variety of special needs and on investigating the predictive power of DT/A results as compared with the results from conventional testing. The DT/A process in this environment is characterized by standardizing and universalizing procedures and making them applicable to large samples of students (e.g., using computerization, as in the LearnTest approaches).
RTI
As can be surmised from everything said so far, the process of RTI unfolds over a substantial period of time and requires the practitioner(s) to make a number of procedural decisions. Specifically, to establish and deliver an RTI-based identification of SLDs, the following chain of events (or some variation of it) should be in place.
The first decision pertains to the structure of the process, from the identification of RTI clientele in the context of regular education to transitioning qualified RTI clientele to special education. What is discussed here is one of a number of possible realizations of this process; clearly, other interpretations are possible.
Here it is assumed that the first step is establishing the RTI clientele; as such, this step is not considered a separate tier but rather an attempt to establish the inclusion criterion or criteria for the RTI process. Thus, it is assumed that there is a screening device targeted at a particular domain of academic functioning (e.g., literacy or numeracy) that can be administered to all students to identify those at risk. In this model, it is understood that this screening is done at school entry and that children at risk for failure (i.e., students who do not have the age-appropriate skills in place) are identified as RTI clientele. This screening can be carried out either by the teacher (who should be trained to administer and score the device and to interpret the data) or in a centralized manner, through the school or the district (in this case children at risk can simply be flagged for the teacher when the data are processed). The identification of RTI clientele should result in two outcomes. First, both the teacher and the parents of the child who meets criteria for RTI after the screening should be informed. The teacher should have a clear plan for interacting with children at risk; Tier 1 of RTI (RTI 1) should be launched by the teacher as soon as the RTI clientele in his or her classroom have been identified. Second, the model should be clearly explained to the parents: Flagging the child as at risk means that the child does not have the necessary skills in place for the acquisition of a given skill (e.g., literacy). The parameters of RTI 1 should be explained to the parents, and the parents should also be made aware of the possibility of tutoring the child or working with academic tutors in the community to help the school move the child out of the risk group.
The subsequent steps of this process, Steps 2 to 4, correspond to the tiers of RTI; thus, here a three-tier RTI model is assumed. One possibility in operationalizing RTI (see Note 6) is to conceptualize its tiers as follows: RTI 1 is a classroom-level intervention, in which the teacher and possibly some aides provide informed and evidence-based support to children identified as at risk and monitor their progress; RTI 2 is a small-group intervention, in which aid is delivered by a specialist (e.g., a reading teacher) to a group of needy children; and RTI 3 is an individualized level of help, which can be delivered by practitioners both in schools and in the community (e.g., teachers, tutors, and trained and supervised paraprofessionals). It is important to note, however, that there are different models of realization of RTI throughout the country, varying from three to five tiers. These three tiers are essential components of RTI, and it is crucial to fill them with effective and efficient pedagogical practices (Foorman, Breier, & Fletcher, 2003; Kamps & Greenwood, 2005).
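One illustrative way to encode the three-tier reading of the model just described (the labels and parameters below are schematic assumptions, not prescriptions from IDEIA or the RTI literature):

```python
# A schematic three-tier configuration; every value here is illustrative.
rti_tiers = {
    1: {"setting": "whole classroom",
        "delivered_by": "classroom teacher (plus aides)",
        "focus": "evidence-based core instruction with progress monitoring"},
    2: {"setting": "small group",
        "delivered_by": "specialist (e.g., a reading teacher)",
        "focus": "more intensive, more frequent supplemental instruction"},
    3: {"setting": "individual",
        "delivered_by": "school or community practitioners",
        "focus": "individualized help; potential gateway to special education"},
}
```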
The final step of this process is linked to decision making regarding the threshold for entry into special education or dismissal from RTI to a regular classroom. It is important to note that a decision on special education can occur only during or after the completion of RTI 3, whereas dismissal into the regular classroom can occur at any tier. Thus, the essence of RTI is in its preventive nature; RTI “should represent a research-validated form of preventative instruction with a format, nature, style, and intensity that can be implemented more widely than special education, by practitioners who are more readily available than special educators” (L. S. Fuchs & Fuchs, 2006, p. 39). It is also important to note that the IDEIA allows parents at any point of RTI to request a formal evaluation to determine eligibility for special education and that the fact of engagement with RTI itself cannot be used to deny or delay a formal evaluation for special education (NCLD, 2006).
The second decision is concerned with the criteria used for the (a) establishment of RTI clientele at Step 1; (b) definitions of nonresponders at Steps 2 to 4, who are to be moved from one tier to a subsequent tier (Compton, 2006); (c) identification of special needs at Step 4; and (d) dismissal decisions at any of the RTI tiers. At this point, there are no established norms, but the literature contains a number of examples and recommendations (e.g., see the earlier discussion in the Terms and Theories section).
Specifically, with regard to (a), a universal one-time screening is recommended upon school entry for all schools. However, the subsequent steps might vary. Schools might have set criteria for including children who score below a particular percentile on norm-referenced (typically, below the 25th percentile) or curriculum-based (typically, below the 15th percentile) assessment; every student who scores below these thresholds will receive RTI 1. Alternatively, schools might decide to monitor the performance of an at-risk group for a few weeks to exclude from RTI 1 those children who recover spontaneously, simply from exposure to systematic instruction; it has been shown that the number of these children is rather high, at least in the domain of reading acquisition (Compton et al., 2006).
With regard to (b), the literature contains three types of recommendations (see also the earlier discussion in Terms and Theories): (1) identify nonresponders at the end of the intervention using the outcome status, (2) identify nonresponders during the intervention based on whether the rate of change or growth corresponds to what is expected or satisfies some criterion, or (3) combine (1) and (2). Technically, Option 1 is the most user-friendly; it requires reassessment at the end of the intervention and a qualitative decision. Options 2 and 3 call for higher levels of technical expertise related to data processing and data analyses; building growth curves requires multiple data points, and calculating slopes requires at least minimal data analysis skills. Thus, for Options 2 and/or 3 to be realistic for wide dissemination, they should probably be available in computer format or performed in a centralized manner through school or district offices or through community services.
The transition of the child to RTI 3 flags the student as a potential client for special education. The decision to offer special education services should be combined not only with detailed and explicit evaluation of attempts to overcome nonresponsiveness but also with broader evaluations to rule out other possible causes (e.g., behavioral disturbances, other developmental impairments, affective and motivational factors). In addition, criterion (c) should be linked directly with the local district interpretation of special education and the position of the Planning and Placement Team and Individualized Education Program in this interpretation. Thus, at RTI 3 (the tier of special education), nonresponsiveness is no longer viewed narrowly (i.e., within the academic domain only), as it was at RTI 1 and 2, but is possibly conceived as a symptom of an underlying syndrome that might or might not be specific to the targeted academic domain. At this time, however, the literature does not contain specific recommendations on whether RTI 3 should be conceived as an entry point into special education or as the mechanism for delivery of special education. All these issues will need to be further developed and specified as the RTI model is adopted and tested by practitioners.
The third decision pertains to the content of RTI itself. What kinds of interventions should be woven into the process, and with what measures should student progress be monitored? Today’s literature contains two major models, an expert opinion–based model and a protocol-based (manualized) one. The expert opinion–based model requires the availability of highly skilled and trained professionals who can accept responsibility for making individualized decisions for students in the RTI process. The protocol-based approach assumes the availability of well-developed materials that can be used off the shelf, with minimal modifications. Both approaches have their pluses and minuses, ranging from the distribution of false positives and negatives in the identification of needy children to financial costs. As the RTI approach develops, there will be more and more models and more and more successful examples of its implementation. For now, the recommendation in the literature is to use both models, blending their elements, and, in implementing them, to rely on the expertise available not only through school districts but also through the community (i.e., local tutorial and diagnostic services and institutions of higher education).
Thus, the old debate from the DT/A literature on the relative merits of clinical (individualized) and standardized approaches to DT/A, and especially to delivering intervention within DT/A (Grigorenko & Sternberg, 1998), is also central to the field of RTI (D. Fuchs et al., 2007). In a nutshell, the debate boils down to the contradiction between “we know that it works, let’s use it” and “it has some promise, but it’s understudied.” As has been previously pointed out, concerns about the lack of reliability and validity of indicators of change appear to be relevant primarily to the research aspects of DT/A and not so relevant to the clinical psychologists delivering and using DT/A in their practice. The latter appear to be content with promoting and accomplishing the change and are not so concerned with the relevant measurement issues. Similarly, proponents of RTI, especially those educators who have demonstrated the value of this concept in their work with their LEAs, also appear to be less anxious about the accumulation of research evidence for or against RTI (e.g., see discussion in D. Fuchs et al., 2007).
Supportive Evidence
Is there any supportive evidence for the concepts of DT/A and RTI, and if so, how strong is it?
DT/A
The literature on DT/A is extensive and includes a number of descriptive reviews and quantitative syntheses of predictive validity. To illustrate with only the most recent applications, DT/A paradigms have been used in clinical practice (Glaspey & Stoel-Gammon, 2005; Haywood & Lidz, 2007) and research (Hessels-Schlatter, 2002; Kirkwood, Weiler, Holms Bernstein, Forbes, & Waber, 2001), educational practice (Bosma & Resing, 2006; Koomen, Verschueren, & Thijs, 2006; Kozulin & Garb, 2004; Zaaiman, van der Flier, & Thijs, 2001) and research (Büchel, 2006; Büchel et al., 1997; Guterman, 2002; Meijer & Elshout, 2001; Meijer & Riemersma, 2002; Tzuriel & Shamir, 2002), health psychology (Fernández-Ballesteros, Zamarrón, & Tàrraga, 2005; Haywood & Miller, 2003), and industrial research (e.g., Embretson, 2000), among other domains.
As for conceptual summaries of the DT/A literature, the general belief is that DT/A provides unique information beyond that obtained through conventional testing but should be used in combination with, rather than in place of, conventional testing. As for quantitative syntheses, the conclusions are that DT/A demonstrates predictive validity superior to that of conventional testing of levels of student abilities, although the size of this advantage depends on the type of DT/A administration and the domain (D. Fuchs et al., 2007; Swanson & Lussier, 2001). In addition, there is a sizable body of empirical evidence supporting the assumption that assisted performance registered in the context of DT/A can provide stronger and crisper evidence of learning potential than performance in a conventional, unassisted context (Büchel et al., 1997; Fabio, 2005; Grigorenko et al., 2006; Hessels, 1996, 1997, 2000; Schlatter & Büchel, 2000; Sternberg et al., 2002).
RTI
The empirical literature that uses RTI explicitly is, at this point, admittedly (Gersten & Dimino, 2006) rather limited, comprising a mix of research studies and evidence from LEAs (e.g., Compton et al., 2006; Kamps & Greenwood, 2005; Kamps et al., 2003; Marston, Muyskens, Lau, & Canter, 2003; O’Connor, Fulmer, Harty, & Bell, 2005; VanDerHeyden, Witt, & Gilbertson, 2007). However, there is a substantial body of research literature that is typically cited in indirect support of RTI (e.g., Torgesen et al., 2001; Vaughn et al., 2003; Vellutino, Scanlon, & Lyon, 2000; Vellutino, Scanlon, & Tanzman, 1998) or its various facets (e.g., Gravois & Rosenfield, 2002). In addition, a new wave of research aims to provide a careful research-based evaluation of RTI-based approaches (e.g., Scanlon, Gelzheiser, Vellutino, Schatschneider, & Sweeney, 2008; Vaughn et al., 2008).
When considered collectively, this evidence offers appealing support for RTI, but then again, the concept has not been around long enough to be tested by its opponents as well as its proponents. In addition, although the concept might have gained initial support, there are still many empirical details to work out to transform this initial evidence into a respected body of literature. There are many serious issues to be researched (e.g., see Terms and Theories above), among which is the fundamental issue of the heterogeneity of special needs and their etiological bases among children. Although RTI procedures might work for the majority of children, there is an undifferentiated minority for whom they do not work. The question, then, is how large such a group can permissibly be.
The U.S. education system has been using problem-solving approaches to instruction for many years. These approaches are known under such names as the teacher assistance team model, prereferral intervention model, instructional support team model, school-based consultation team model, problem-solving model, and many others (ASHA et al., 2006). A number of diverse school district–based implementations of RTI can be found in the literature on the Heartland Agency Model in Iowa (Ikeda, Tilly, Stumme, Volmer, & Allison, 1996) and, as cited in Marston (2005), Ohio’s Intervention-Based Assessment (Telzrow, McNamara, & Hollinger, 2000), Pennsylvania’s Instructional Support Team (Kovaleski & Glew, 2006), Minneapolis Public Schools’ Problem-Solving Model (Marston et al., 2003), and the Vail School District of Tucson, Arizona (VanDerHeyden et al., 2007). The literature also contains a number of illustrations, although still limited in number, of successful implementations of RTI in controlled (Marston, 2005; O’Connor, Harty, & Fulmer, 2005) and real-life district settings (Marston, 2005; Moore-Brown, Montgomery, Bielinski, & Shubin, 2005; VanDerHeyden et al., 2007). One important consideration is that at this point, RTI refers to a particular approach to decision making related to special education (Christ, Burns, & Ysseldyke, 2005). In fact, sets of procedures, models, and forms for different implementations of RTI are just being developed and validated (Barth et al., 2008; Compton et al., 2006; Compton et al., 2008; Scanlon et al., 2008; VanDerHeyden, Witt, & Barnett, 2005; Vaughn et al., 2008). And although RTI as a collective way of systematizing best practices in the field seems to be associated with positive effects (Burns, Appleton, & Stehouwer, 2005), it is far from being a recognized, rigorously evaluated, evidence-based practice (D. Fuchs & Deshler, 2007). “Emerging from largely grass roots efforts … RTI may have many futures” (VanDerHeyden et al., 2007, p. 254), and these possible future outcomes are being shaped by the research and practice in the field now.
To summarize, at this point the data for DT/A and RTI are somewhat incomparable. DT/A, primarily because of its longer life span, has accumulated a substantial amount of evidence in its favor in a variety of domains, situations, and countries. The entry of RTI into the world of empirical data has been rather unconventional: districtwide models used the concept in their everyday practice before comprehensive research evidence attesting to its validity had been built up. However, at this point, it is difficult to synthesize this district-based evidence because (a) only a limited amount of data has been published, (b) the models of implementation are very diverse, and (c) there is limited information concerning the fidelity of implementation of RTI-based models within different districts. Yet there is a great deal of enthusiasm with regard to those few published grassroots applications of RTI. And although enthusiasm does not and cannot substitute for research evidence, the energy of enthusiasm is always helpful for the development of any field. In addition, although many DT/A and RTI supporters are critical of traditional psychometric approaches and standardized testing, there is still an expectation in the field that assessments used within DT/A and RTI generate valid and reliable data. Supporters of both DT/A and RTI agree that much still needs to be done to understand the psychometric properties of both DT/A and RTI (D. Fuchs et al., 2007; Grigorenko & Sternberg, 1998) and to develop tools that are usable not only in the context of clinical services but also in schools.
Use: Readiness of Educators and School Psychologists
The individualized approach to each struggling child, considered philosophically fundamental to both DT/A and RTI, is both a strength and a weakness of each approach. The balance between those strengths and weaknesses is reflected in the fact that both approaches, internally, are far from homogeneous and are characterized by tensions and disagreements between different realizations of the general ideas underlying DT/A and RTI.
Their strength is that both approaches have the potential to provide the information that is most useful to teachers—the information relevant for profiling a child’s abilities, skills, and educational needs and for devising pedagogical strategies in response to such a profile (Freeman & Miller, 2001). Some proponents of DT/A say that this type of assessment has indeed already delivered on its potential and provides a context and a framework for pedagogical approaches that can be delivered both through classroom lessons and direct intervention (Gillam, Peña, & Miller, 1999). One substantial difference between DT/A and RTI is the scale on which these techniques were conceived, with RTI intended to be applicable to large numbers of children (no DT/A approach has ever aimed to be scaled up to systemic levels). Thus, individualization at the DT/A level appears to be more achievable because the model allows more work with individual cases from the very beginning. RTI assumes the presence of a number of steps before it allows for individualization. This RTI-based transition from many children to one child is not yet well documented in the literature. Again, given the relative paucity of publications on empirical implementations of RTI, its potential is yet to be realized.
The weakness is that any individualization requires expert-level skill. As a tailor requires more expertise to create a unique outfit than to sew on an assembly line, a tester or educator needs higher levels of skill to understand the psychological theories and processes relevant to testing or teaching every individual child. This higher level of expertise has been referred to as the ability to problem solve while dealing with atypical children (Deno, 1995). In fact, such a concept of teacher as problem solver has been advocated as central to pre- and in-service teacher training (Ryba & Brown, 1994). This position can be taken to the extreme: DT/A requires the active mediation of performance by testers, and teaching involves the active modification of students by teachers. Correspondingly, DT/A can be equated with teaching; thus, “all teachers (including parents, tutors, and examiners), in the act of teaching,” can be viewed as deliverers of DT/A (Gerber, 2001, p. 193). But as we all know, there is a huge difference between “can be” or “should be” and “is.” So how does the attempt to individualize assessment and education central to both DT/A and RTI translate into their use?
DT/A
In the United Kingdom, for example, survey data indicate a generally positive attitude of educators and school psychologists toward DT/A, but this attitude is constrained by the understanding that DT/A requires significant training, commitment, and acceptance on the part of educational authorities (Deutsch & Reynolds, 2000). These limitations help explain the somewhat infrequent use of DT/A: One third of educators and psychologists familiar with the concept of DT/A report using the methodology, but only rarely, because of (a) a lack of skill and knowledge in administering DT/A and (b) the time it demands (Haney & Evans, 1999). Thus, it is not surprising that DT/A is “being used in an incidental way” (Freeman & Miller, 2001, p. 6). The general attitude toward DT/A is that it is potentially, but not actually, useful.
There are many proponents of standardizing the procedures within DT/A (Hamers et al., 1994). Such standardization could maximize psychologists’ and teachers’ comfort in delivering DT/A. In fact, there are some empirical demonstrations that such standardization can be effective in classrooms (Birney et al., 2007). Yet the majority of DT/A work is still very much about tailoring the assessment tools and approaches to the needs of every child. And such individualization, as mentioned earlier, requires vast amounts of expertise and experience.
RTI
Similarly, the level of expertise required for the implementation and dissemination of RTI is one of its limitations. For example, as mentioned earlier, one districtwide realization of an RTI-like approach is centered in Heartland, Iowa (Ikeda & Gustafson, 2002). This approach relies on four levels of educational support for struggling students: (1) a teacher-parent team effort, (2) a teacher-teacher school-based professional support group (the so-called Building Assistance Team) effort, (3) a teacher-district team effort, and (4) special education. At each level, the system is designed to invoke the same process: (a) determination of the problems, (b) cause analyses, (c) design of the relevant intervention, (d) implementation of the intervention, (e) student progress monitoring, (f) in-progress modification of the intervention, (g) evaluation of the effectiveness of the intervention, and (h) determination of future actions. Clearly, this approach calls for a high level of expertise among educators and affiliated practitioners. Specifically, the staff involved with different levels of interventions should (a) be qualified to make diagnostic and clinical judgments; (b) be knowledgeable of and skilled in applying different evidence-based interventions; (c) be able to monitor the effectiveness of the approach for a given child; and (d) have the time and opportunity to record, catalogue, and justify professional decisions throughout the process. Evidently, this approach implies a considerable level of expertise in both assessment and intervention and assumes that the professionals involved are able to carry out this problem-solving cycle from Levels 1 to 4 in an uninterrupted manner.
As with DT/A, an alternative to relying on expert professionals is to develop and disseminate multiple prepackaged or manualized treatment protocols that can be used in a standard way. In this context, the process is different. First, it assumes the presence of evidence-based materials. Second, it assumes that the materials are available for both instruction and assessment at multiple tiers. Third, the implementation starts with a trial of fixed duration. If the protocol appears successful, the full course of treatment is delivered and the child is returned to the regular classroom. If the protocol appears unsuccessful, either it is replaced with a different protocol or the child is moved to a subsequent tier. The literature contains selective illustrations of this manualized approach, although they are still few and far between (Vellutino et al., 1996). Currently, many organizations supporting professionals in the field of general and special education and related services indicate the need to develop new assessments and materials and to provide significant professional development to support the implementation and use of RTI and related practices (ASHA et al., 2006).
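The branching logic of such a fixed-duration trial can be summarized in a short sketch. Note that the choice between replacing the protocol and moving the child to the next tier is left open in the literature, so the tie-breaking rule below (try any remaining protocol first) is an assumption for illustration only.

```python
# A minimal sketch of the fixed-duration trial logic described above.
# Which branch to take after an unsuccessful trial is an assumption
# for illustration; the text leaves that choice open.
from enum import Enum, auto

class NextAction(Enum):
    DELIVER_FULL_COURSE_THEN_RETURN = auto()
    TRY_DIFFERENT_PROTOCOL = auto()
    MOVE_TO_NEXT_TIER = auto()

def after_trial(trial_successful: bool, untried_protocols: int) -> NextAction:
    """Decide what follows a fixed-duration trial of a manualized protocol."""
    if trial_successful:
        # The full course of treatment is delivered, and the child is
        # then returned to the regular classroom.
        return NextAction.DELIVER_FULL_COURSE_THEN_RETURN
    if untried_protocols > 0:
        return NextAction.TRY_DIFFERENT_PROTOCOL
    return NextAction.MOVE_TO_NEXT_TIER

# Example: an unsuccessful trial with one alternative protocol on hand.
print(after_trial(trial_successful=False, untried_protocols=1))
# NextAction.TRY_DIFFERENT_PROTOCOL
```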
Currently, there are examples of successful implementation of RTI in a number of school districts throughout the United States. However, these implementations are often distinguished by specific characteristics of leadership, commitment, and corresponding training (D. Fuchs & Deshler, 2007) and might not be as easy to implement in just any school district. A survey of practicing school psychologists indicated a 75% endorsement rate for RTI, but a substantial component of this endorsement (61.9%) is attributable to the promotion of complementarity between RTI and cognitive and IQ-discrepancy models, at least for specific reading disability (Machek & Nelson, 2007).
To summarize, one more shared feature of DT/A and RTI is the expectation that their deliverers will have a rather nontrivial amount of expertise, experience, and support during delivery. These expectations apply to educators working with children with special needs—a population for which examples of the use of both DT/A and RTI are numerous (Cioffi & Carney, 1997; van der Aalsvoort & Harinck, 2001). One possible solution to this demand for expertise is to manualize and standardize approaches to the delivery of both DT/A and RTI. However, this solution has been noted as trading one beast (standardized testing) for another (standardized instruction; Gerber, 2005).
Blending the Two: Case Studies—DT/A or RTI?
Until now, the discussion has revolved around DT/A and RTI as overlapping but separate concepts. In this section, a number of examples are provided of the two concepts substituting for each other.
First, there are numerous instances of overlapping terminology, in which terms associated with DT/A penetrate the domain of RTI and vice versa. For example, such uses of the concept of DT/A as “DA yields information about learner responsiveness” (Gillam et al., 1999, p. 37) and DT/A “is a method of conducting trial lessons” (Humphries, Krogh, & McKay, 2001, p. 174) make DT/A, at least superficially, indistinguishable from RTI. In addition, proponents of one of the two approaches have even borrowed the language used in one domain to define the other. For example, Embretson (2000) proposed a general definition of DT/A, stating that “in a standardized dynamic test, the responsiveness of an examinee to systematically and objectively changing testing conditions is measured” (p. 507).
Second, there are overlaps in the objectives and functions of DT/A and RTI. For example, in discussing the functions of DT/A, Elliott and Lauchlan (1997) listed the following objectives: predict educational success, enable judgments to be made about the appropriateness of special education placement, help build profiles of students’ strengths and weaknesses, and evaluate students’ modifiability. Ultimately, DT/A is viewed as a means to recommend appropriate versus inappropriate approaches to teaching in increasingly inclusive educational settings (Elliott, 2000), whether the child is classified as having SLD or not. Although DT/A has already provided quantitative evidence suggesting that it results in “more valid” classifications of SLD (Swanson & Howard, 2005), the challenge is to translate classifying or diagnostic information into the most appropriate teaching practices.
Now, this maps almost perfectly onto what RTI tries to accomplish. DT/A has also been used to address inequalities in referrals to special education (Harry & Klingner, 2006) for children from culturally and linguistically diverse backgrounds (Anderson, 2001; Hwa-Froelich & Matsuo, 2005; Jacobs, 2001; Jacobs & Coufal, 2001; Peña, 2000; Peña, Iglesias, & Lidz, 2001; Peña & Quinn, 1997; Roseberry & Connell, 1991; Ukrainetz, Harpell, Walsh, & Coyle, 2000). But this is exactly how RTI entered the field in the 1980s!
Third, there is a tacit understanding that these two concepts overlap substantially. For example, a review of DT/A literature was criticized for not including work on RTI (Schulte, 2004).
Fourth, there are examples of DT/A leaving the clinical realm; in a number of instances DT/A concepts such as ZPD are viewed as multilayer structures of instruction and assessment to be used in the context of general education (e.g., teaching of foreign languages; Kinginger, 2002).
Finally, there are a few studies that explicitly or implicitly blended DT/A and RTI. For example, one study provides an illustration of a yearlong tutorial (from the end of first grade to the end of second grade, with 36.5 sessions on average) in reading and writing (Abbott et al., 1997). This tutorial was administered to 16 children referred for severe reading problems; their progress was monitored with growth curves. The children were differentiated as responders (defined through change in their growth being greater than chance) and nonresponders (growth not different from chance) to treatment for all trained skills; the pattern of results differed for different clusters of skills (i.e., children who responded well to training on orthographic skills might not have responded to training on phonological skills, and vice versa). As a group, the children showed significant (greater than chance) gains on indicators of orthographic and phonological coding, decoding of real and pseudowords, reading comprehension, letter automaticity, and spelling, but not for written composition. The data obtained through these tutorials, conceptualized as prolonged dynamic assessment, were used to generate individual pedagogical recommendations and, in some cases, to prevent children from receiving the diagnosis of SLD prematurely. Thus, can one say whether this is a DT/A or an RTI study? In addition, there are explicit statements in the literature with regard to the role of DT/A in RTI (D. Fuchs et al., 2007).
In short, there are quite a few instances in which the concepts have been used interchangeably. In addition, there are many examples in the literature in which these concepts are implicitly equated.
Conclusion
In reviews of DT/A, it has been stated that the methodology has promise but needs time and effort to grow (Grigorenko & Sternberg, 1998). Comments on RTI have noted that it “has the most potential” “but currently falls short” (Dean, Burns, Grialou, & Varro, 2006, p. 157). In short, both approaches are still developing and crystallizing; there is an ongoing attempt to bring both approaches into the category of “evidence based,” but how much evidence is enough, and what, specifically, should that evidence be?
In closing, I would like to make two observations. First, DT/A and RTI appear to be two sides of the same coin. Both concepts have so many overlapping facets and are so often substituted for each other that one wonders whether the field needs two concepts to signify the same construct. Yet these two concepts originated differently and within different traditions, and cancellation or substitution of one of them might result in resistance. So if traditions are unbreakable, the field should at least acknowledge its duplication of effort.
Second, both DT/A and RTI (or a single construct that underlies both concepts) put the accent on services rather than diagnostics (Staskowski & Rivera, 2005). In fact, the concept of RTI is only loosely related to any theories and is fundamentally embedded in practice. DT/A originated as embedded in particular theoretical perspectives but has outgrown these limitations and, in its current realization, generally speaking, is theory fair (or theory free). These features of RTI and DT/A fit a particular mode of modern knowledge production that is distinguished by its occurrence within the applied context and its embrace of social accountability and reflexivity (Gibbons et al., 1994; Nowotny, Scott, & Gibbons, 2001). Both DT/A and RTI arose in response to applied tasks and are associated with important changes that need to be made in educational systems around the world. This mode of knowledge acquisition is developing in response to changes in modern society (Gibbons et al., 1994). And it seems that, unknowingly, this is the mode of knowledge acquisition referenced in yet another famous statement credited to Kurt Lewin: “The best way to understand something is to try to change it” (as cited in Greenwood & Levin, 1998, p. 19). Both DT/A and RTI are about change: change conceptually (a change within a child) and change in application (a change of established practices of dealing with children with special needs). But can we accept them as evidence-based scientific concepts?
Science, scientific research, and scientific concepts are characterized by both epistemic (e.g., objectivity, empirical observation, careful experimental manipulation, logical consistency of theories, and use of particular specialized linguistic and cognitive rituals) and nonepistemic (e.g., usability, worthiness, and applicability) values (Brown, 2001; Crosby, Clayton, Downing, & Iyer, 2004). Although there is general agreement that any theory should be influenced by epistemic values, there is much debate with regard to the importance of nonepistemic values and whether science should be freed from them (Kendler, 2004) or not (Fortun & Bernstein, 1998).
Given the content, intent, and history of DT/A and RTI, it can be stated that their nonepistemic values are transparent and well understood. Yet both DT/A and RTI need much supportive evidence to enhance the epistemic value of their underlying construct. And maybe such an enhancement would happen more quickly if the somewhat artificial border between the two concepts were eliminated, or at least lifted. After all, they are two sides of the same coin.
Acknowledgments
Preparation of this article was supported by Grant R206R00001 under the Javits Act Program as administered by the Institute for Educational Sciences, U.S. Department of Education, and by Grant P50 HD052120 as administered by the U.S. National Institutes of Health. Grantees undertaking such projects are encouraged to express freely their professional judgment. This article, therefore, does not necessarily represent the position of the Institute for Educational Sciences or the U.S. Department of Education; no official endorsement should be inferred. I express my gratitude to Drs. Julian Elliott, Doug Fuchs, Linda Jarvin, and Ida Jeltova and anonymous reviewers for this journal for their helpful comments and to Ms. Robyn Rissman for her editorial assistance.
Biography
Elena L. Grigorenko received her PhD in general psychology from Moscow State University, Russia, in 1990, and her PhD in developmental psychology and genetics from Yale University in 1996. Currently, she is an associate professor of child studies, psychology, and epidemiology and public health at Yale and an adjunct professor of psychology at Columbia University and Moscow State University. She has published more than 250 peer-reviewed articles, book chapters, and books.
Footnotes
The abbreviation DT/A is used here to refer to both dynamic testing (DT, a narrower category, pointing to cognitive testing administered dynamically) and dynamic assessment (DA, a broader category, which includes not only cognitive testing but also evaluations of behavior and emotions).
The summary provided here is by necessity abbreviated dramatically and cannot do justice to the large field of DT/A.
The word “instruction” is also used (Vaughn & Fuchs, 2003).
Although the field predominantly applies DT/A to children with disability (Tymms & Elliott, 2006), DT/A has also been used with gifted children (Kanevsky, 1995; Kanevsky & Geake, 2004; Lidz & Elliott, 2006).
These definitions were coformulated with Dr. Ida Jeltova.
In some implementations, the three tiers of RTI are referred to as the benchmark level (RTI 1), strategic level (RTI 2), and intensive level (RTI 3; Center for Promoting Research to Practice, n.d.).
References
- Abbott SP, Reed E, Abbott RD, Berninger VW. Year-long balanced reading/writing tutorial: A design experiment used for dynamic assessment. Learning Disability Quarterly. 1997;20:249–263.
- Allal L, Ducrey GP. Assessment of—or in—the zone of proximal development. Learning and Instruction. 2000;10:137–152.
- American Speech-Language-Hearing Association, Council of Administrators of Special Education, Council for Exceptional Children, Council for Learning Disabilities, Division for Learning Disabilities, International Dyslexia Association, et al. New roles in response to intervention: Creating success for schools and children. 2006 Nov. Available from http://www.inter-dys.org.
- Anderson RT. Learning an invented inflectional morpheme in Spanish by children with typical language skills and with specific language impairment (SLI). International Journal of Language & Communication Disorders. 2001;36:1–19. doi: 10.1080/13682820118926.
- Barrera M. Roles of definitional and assessment models in the identification of new or second language learners of English for special education. Journal of Learning Disabilities. 2006;39:142–156. doi: 10.1177/00222194060390020301.
- Barth A, Stuebing KK, Anthony JL, Denton CA, Mathes PG, Fletcher JM, et al. Agreement among response to intervention criteria for identifying responder status. Learning and Individual Differences. 2008;18:296–307. doi: 10.1016/j.lindif.2008.04.004.
- Bateman B. An educational view of a diagnostic approach to learning disorders. In: Hellmuth J, editor. Learning disorders. Vol. 1. Seattle, WA: Special Child Publications; 1965. pp. 219–239.
- Beckmann JF. Superiority: Always and everywhere? On some misconceptions in the validation of dynamic testing. Educational and Child Psychology. 2006;23:35–49.
- Berninger VW, Abbott RD. Redefining learning disabilities: Moving beyond aptitude-achievement discrepancies to failure to respond to validated treatment protocols. In: Lyon GR, editor. Frames of reference for the assessment of learning disabilities: New views on measurement issues. Baltimore: Paul Brookes; 1994. pp. 163–183.
- Bosma T, Resing WCM. Dynamic assessment and a reversal task: A contribution to needs-based assessment. Educational and Child Psychology. 2006;23:81–98.
- Brown JR. Who rules in science? An opinionated guide to the wars. Cambridge, MA: Harvard University Press; 2001.
- Büchel FP. Analogical reasoning in students with moderate intellectual disability: Reasoning capacity limitations or memory overload? Educational and Child Psychology. 2006;23:61–80.
- Büchel FP, Schlatter C, Scharnhorst U. Training and assessment of analogical reasoning in students with severe learning difficulties. Educational and Child Psychology. 1997;14:109–120.
- Budoff M. Measures for assessing learning potential. In: Lidz C, editor. Dynamic assessment: An interactional approach to evaluating learning potential. New York: Guilford; 1987a. pp. 173–195.
- Budoff M. The validity of learning potential assessment. In: Lidz C, editor. Dynamic assessment: An interactional approach to evaluating learning potential. New York: Guilford; 1987b. pp. 53–81.
- Burns MK, Appleton JJ, Stehouwer JD. Meta-analytic review of Responsiveness-to-Intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment. 2005;23:381–394.
- Camilleri B. Dynamic assessment and intervention: Improving children’s narrative abilities. International Journal of Language & Communication Disorders. 2005;40:240–242.
- Campione JC. Assisted assessment: A taxonomy of approaches and an outline of strengths and weaknesses. Journal of Learning Disabilities. 1989;22:151–165. doi: 10.1177/002221948902200303.
- Campione JC, Brown A, Ferrara RA, Jones RS, Steinberg E. Breakdowns in flexible use of information: Intelligence-related differences in transfer following equivalent learning performance. Intelligence. 1985;9:297–315.
- Carlson JS, Wiedl KH. Principles of dynamic assessment: The application of a specific model. Learning and Individual Differences. 1992;4:153–166.
- Carlson JS, Wiedl KH. The validity of dynamic assessment. In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 681–712.
- Cortiella C. A parent’s guide to response-to-intervention. 2006. Retrieved January 6, 2007, from http://www.LD.org.
- Center for Promoting Research to Practice. Project MP3: What is RTI? Bethlehem, PA: Lehigh University; n.d. Available from http://www.lehigh.edu/collegeofeducation/mp3/rti/rti.htm.
- Chak A. Adult sensitivity to children’s learning in the zone of proximal development. Journal for the Theory of Social Behaviour. 2001;31:383–395.
- Christ T, Burns MK, Ysseldyke JE. Conceptual confusion within response-to-intervention vernacular: Clarifying meaningful differences. Communique. 2005;34:6–8.
- Cioffi G, Carney JJ. Dynamic assessment of composing abilities in children with learning disabilities. Educational Assessment. 1997;4:175–202.
- Compton DL. How should “unresponsiveness” to secondary intervention be operationalized? It is all about the nudge. Journal of Learning Disabilities. 2006;39:170–173. doi: 10.1177/00222194060390020501.
- Compton DL, Fuchs D, Fuchs LS, Bryant JD. Selecting at-risk readers in first grade for early intervention: A two-year longitudinal study of decision rules and procedures. Journal of Educational Psychology. 2006;98:394–409.
- Compton DL, Fuchs D, Fuchs LS, Elleman AM, Gilbert JK. Tracking children who fly below the radar: Latent transition modeling of students with late-emerging reading disability. Learning and Individual Differences. 2008;18:329–337.
- Crosby FJ, Clayton S, Downing R, Iyer A. Values and science. American Psychologist. 2004;59:125–126.
- Danielson L, Doolittle J, Bradley R. Past accomplishments and future challenges. Learning Disability Quarterly. 2005;28:137–139.
- de Leeuw L. Teaching problem solving: An ATI study of the effects of teaching algorithmic and heuristic solution methods. Instructional Science. 1983;12:1–48.
- de Leeuw L, van Daalen H, Beishuizen JJ. Problem solving and individual differences: Adaptation to and assessment of student characteristics by computer based instruction. In: Corte ED, Lodewijks H, Parmentier R, editors. Learning & instruction: European research in an international context. Vol. 1. Elmsford, NY: Pergamon; 1987. pp. 99–110.
- Dean VJ, Burns MK, Grialou T, Varro PJ. Comparison of ecological validity of learning disabilities diagnostic models. Psychology in the Schools. 2006;43:157–168.
- Deno SL. Curriculum-based measurement: The emerging alternative. Exceptional Children. 1995;52:219–232. doi: 10.1177/001440298505200303.
- Deutsch R, Reynolds Y. The use of dynamic assessment by educational psychologists in the UK. Educational Psychology in Practice. 2000;16:311–331.
- Ehren BJ, Nelson NW. The responsiveness to intervention approach and language impairment. Topics in Language Disorders. 2005;25:120–131.
- Elliott JG. Dynamic assessment in educational contexts: Purpose and promise. In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 713–740.
- Elliott JG. Dynamic assessment in educational settings: Realising potential. Educational Review. 2003;55:15–32.
- Elliott JG, Lauchlan F. Assessing potential—The search for the philosopher’s stone? Educational and Child Psychology. 1997;14:6–16.
- Elliott JG, Lidz C, Shaughnessy MF. An interview with Joe Elliott and Carol Lidz. North American Journal of Psychology. 2004;6:349–360.
- Embretson SE. Multidimensional measurement from dynamic tests: Abstract reasoning under stress. Multivariate Behavioral Research. 2000;35:505–542. doi: 10.1207/S15327906MBR3504_05.
- Embretson SE, Prenovost LK. Dynamic cognitive testing: What kind of information is gained by measuring response time and modifiability? Educational and Psychological Measurement. 2000;60:837–863.
- Fabio RA. Dynamic assessment of intelligence is a better reply to adaptive behavior and cognitive plasticity. Journal of General Psychology. 2005;132:41–64. doi: 10.3200/GENP.132.1.41-66.
- Fernández-Ballesteros R, Zamarrón MD, Tàrraga L. Learning potential: A new method for assessing cognitive impairment. International Psychogeriatrics. 2005;17:119–128. doi: 10.1017/s1041610205000992.
- Fernandez M, Wegerif R, Mercer N, Rojas-Drummond S. Re-conceptualizing “scaffolding” and the zone of proximal development in the context of symmetrical collaborative learning. Journal of Classroom Interaction. 2001–2002;36–37:40–54.
- Feuerstein R, Feuerstein RS, Falik LH, Rand Y. The dynamic assessment of cognitive modifiability: The Learning Propensity Assessment Device: Theory, instruments and techniques. Revised and expanded edition of The Dynamic Assessment of Retarded Performers. Jerusalem, Israel: ICELP Press; 2002.
- Fletcher JM, Lyon GR, Fuchs LS, Barnes MA. Learning disabilities. New York: Guilford; 2007.
- Fletcher JM, Shaywitz SE, Shankweiler DP, Katz L, Liberman IY, Stuebing KK, et al. Cognitive profiles of reading disability: Comparisons of discrepancy and low achievement definitions. Journal of Educational Psychology. 1994;86:6–23.
- Foorman BR, Breier JI, Fletcher JM. Interventions aimed at improving reading success: An evidence-based approach. Developmental Neuropsychology. 2003;24:613–639. doi: 10.1080/87565641.2003.9651913.
- Fortun M, Bernstein HJ. Muddling through: Pursuing science and truths in the twenty-first century. Washington, DC: Counterpoint; 1998.
- Freeman L, Miller A. Norm-referenced, criterion-referenced, and dynamic assessment: What exactly is the point? Educational Psychology in Practice. 2001;17:3–16.
- Fuchs D, Deshler DD. What we need to know about responsiveness to intervention (and shouldn’t be afraid to ask). Learning Disabilities Research & Practice. 2007;22:129–136.
- Fuchs D, Fuchs LS. Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly. 2006;41:93–99.
- Fuchs D, Fuchs LS, Compton DL. Identifying reading disabilities by responsiveness-to-instruction: Specifying measures and criteria. Learning Disability Quarterly. 2006;27:216–227.
- Fuchs D, Fuchs LS, Compton DL, Bouton B, Caffrey E, Hill L. Thinking outside the box about RTI: A role for dynamic assessment? 2007. Unpublished manuscript.
- Fuchs D, Mock D, Morgan PL, Young CL. Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice. 2003;18:157–171.
- Fuchs LS, Fuchs D. Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research & Practice. 1998;13:204–219.
- Fuchs LS, Fuchs D. Identifying learning disabilities with RTI. Perspectives. 2006;32:39–43.
- Gerber MM. Dynamic assessment for students with learning disabilities: Lessons in theory and design. In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 263–292.
- Gerber MM. All teachers are dynamic tests. Issues in Education. 2001;7:193–200.
- Gerber MM. All teachers are dynamic tests: Issues in education. Contributions From Educational Psychology. 2002;7:193–200.
- Gerber MM. Teachers are still the test: Limitations of response to instruction strategies for identifying children with learning disabilities. Journal of Learning Disabilities. 2005;38:516–524. doi: 10.1177/00222194050380060701.
- Gersten R, Dimino JA. RTI (response to intervention): Rethinking special education for students with reading difficulties (yet again). Reading Research Quarterly. 2006;41:99–108.
- Gibbons M, Limoges H, Nowotny H, Schwartzman S, Scott P, Trow M. The new production of knowledge: The dynamics of science and research in contemporary societies. London: Sage; 1994.
- Gillam RB, Peña ED, Miller L. Dynamic assessment of narrative and expository discourse. Topics in Language Disorders. 1999;20:33–47.
- Glaspey AM, Stoel-Gammon C. Dynamic assessment in phonological disorders: The scaffolding scale of stimulability. Topics in Language Disorders. 2005;25:220–230.
- Good RH, Simmons DC, Kameenui EJ. The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading. 2001;5:257–288.
- Gravois TA, Rosenfield S. A multi-dimensional framework for evaluation of instructional consultation teams. Journal of Applied School Psychology. 2002;19:5–29.
- Greenwood DJ, Levin M. Introduction to action research: Social research for social change. Thousand Oaks, CA: Sage; 1998.
- Gresham FM. Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In: Bradley R, Danielson L, Hallahan DP, editors. Identification of learning disabilities: Research to practice. Mahwah, NJ: Lawrence Erlbaum; 2002. pp. 467–519.
- Grigorenko EL, Sternberg RJ. Dynamic testing. Psychological Bulletin. 1998;124:75–111.
- Grigorenko EL, Sternberg RJ, Jukes M, Alcock K, Lambo J, Ngorosho D, et al. Effects of antiparasitic treatment on dynamically and statically tested cognitive skills over time. Journal of Applied Developmental Psychology. 2006;27:499–526.
- Guterman E. Toward dynamic assessment of reading: Applying metacognitive awareness guidance to reading assessment tasks. Journal of Research in Reading. 2002;25:283–298.
- Guthke J. The learning test concept—An alternative to the traditional static intelligence test. German Journal of Psychology. 1982;6:306–324.
- Guthke J. Learning tests: The concept, main research findings, problems and trends. Learning and Individual Differences. 1992:137–151.
- Guthke J, Beckmann JF. The learning test concept and its application in practice. In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 17–69.
- Guthke J, Beckmann JF. Dynamic assessment with diagnostic problems. In: Sternberg RJ, Lautrey J, Lubart TI, editors. Models of intelligence: International perspectives. Washington, DC: American Psychological Association; 2003. pp. 227–242.
- Guthke J, Beckmann JF, Dobat H. Dynamic testing—Problems, uses, trends and evidence of validity. Educational and Child Psychology. 1997;14:17–32.
- Hale JB, Naglieri JA, Kaufman AS, Kavale KA. Specific learning disability classification in the new Individuals with Disabilities Education Act: The danger of good ideas. School Psychologist. 2004;58:6–13.
- Hamers JHM, Pennings A, Guthke J. Training-based assessment of school achievement. Learning and Instruction. 1994;4:347–360.
- Hamers JHM, Resing WCM. Learning potential assessment: Introduction. In: Hamers JHM, Sijtsma K, Ruijssenaars AJJM, editors. Learning potential assessment: Theoretical, methodological and practical issues. Lisse, Netherlands: Swets & Zeitlinger; 1993. pp. 23–41.
- Haney MR, Evans JG. National survey of school psychologists regarding use of dynamic assessment and other nontraditional assessment techniques. Psychology in the Schools. 1999;36:295–304.
- Haring N, Bateman B. Teaching the learning disabled child. Englewood Cliffs, NJ: Prentice Hall; 1977.
- Harry B, Klingner J. Why are so many minority students in special education? Understanding race & disability in schools. New York: Teachers College Press; 2006.
- Haywood HC, Lidz C. Dynamic assessment in practice. New York: Cambridge University Press; 2007.
- Haywood HC, Miller MB. Dynamic assessment of adults with traumatic brain injuries. Journal of Cognitive Education and Psychology. 2003;3:137–163.
- Heller KA, Holtzman WH, Messick S, editors. Placing children in special education: A strategy for equity. Washington, DC: National Academies Press; 1982.
- Hessels MGP. Ethnic differences in learning potential test scores: Research into item and test bias in the Learning Potential Test for Ethnic Minorities. Journal of Cognitive Education. 1996;5:133–153.
- Hessels MGP. Low IQ but high learning potential: Why Zeyneb and Moussa do not belong in special education. Educational and Child Psychology. 1997;14:121–136.
- Hessels MGP. The Learning Potential Test for Ethnic Minorities (LEM): A tool for standardized assessment of children in kindergarten and the first years of primary school. In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 109–131.
- Hessels-Schlatter C. A dynamic test to assess learning capacity in people with severe impairments. American Journal of Mental Retardation. 2002;107:340–351. doi: 10.1352/0895-8017(2002)107<0340:ADTTAL>2.0.CO;2.
- Hollenbeck AF. From IDEA to implementation: A discussion of foundational and future responsiveness-to-intervention research. Learning Disabilities Research & Practice. 2007;22:137–146. [Google Scholar]
- Humphries T, Krogh K, McKay R. Theoretical and practical considerations in the psychological and educational assessment of the student with intractable epilepsy: Dynamic assessment as an adjunct to static assessment. Seizure. 2001;10:173–180. doi: 10.1053/seiz.2000.0490. [DOI] [PubMed] [Google Scholar]
- Hwa-Froelich DA, Matsuo H. Vietnamese children and language-based processing tasks. Language, Speech, and Hearing Services in Schools. 2005;36:230–243. doi: 10.1044/0161-1461(2005/023). [DOI] [PubMed] [Google Scholar]
- Ikeda MJ, Gustafson JK. Heartland AEA 11’s problem solving process: Impact on issues related to special education (No. 2002-01) Johnston, IA: Heartland Area Educational Agency; 2002. p. 11. [Google Scholar]
- Ikeda MJ, Tilly WDI, Stumme J, Volmer L, Allison R. Agency-wide implementation of problem solving consultation: Foundations, current implementation, and future directions. School Psychology Quarterly. 1996;11:228–243. [Google Scholar]
- Individuals with Disabilities Education Improvement Act, 20 U.S.C. § 1400 et seq (2004).
- Jacobs EL. The effects of adding dynamic assessment components to a computerized preschool language screening test. Communication Disorders Quarterly. 2001;22:217–226. [Google Scholar]
- Jacobs EL, Coufal KL. A computerized screening instrument of language learnability. Communication Disorders Quarterly. 2001;22:67–75. [Google Scholar]
- Jeltova I, Birney D, Fredine N, Jarvin L, Sternberg RJ, Grigorenko EL. Dynamic assessment as a process-oriented assessment in educational settings. Advances in Speech Language Pathology. 2007;9:1–13. [Google Scholar]
- Jitendra AK, Kameenui EJ. Dynamic assessment as a compensatory assessment approach: A description and analysis. Remedial & Special Education. 1993;14:6–18. [Google Scholar]
- Kalyuga S, Sweller J. Rapid dynamic assessment of expertise to improve the efficiency of adaptive e-learning. Educational Technology Research and Development. 2005;53:83–93. [Google Scholar]
- Kamps DM, Greenwood CR. Formulating secondary-level reading interventions. Journal of Learning Disabilities. 2005;38:500–509. doi: 10.1177/00222194050380060501. [DOI] [PubMed] [Google Scholar]
- Kamps DM, Wills HP, Greenwood CR, Thorne S, Lazo JF, Crockett JL, et al. Curriculum influences on growth in early reading fluency for students with academic and behavioral risks: A descriptive study. Journal of Emotional and Behavioral Disorders. 2003;11:211–224. [Google Scholar]
- Kanevsky L. Learning potentials of gifted students. Roeper Review. 1995;17:157–163. [Google Scholar]
- Kanevsky L, Geake J. Inside the zone of proximal development: Validating a multifactor model of learning potential with gifted students and their peers. Journal for the Education of the Gifted. 2004;28:182–217. [Google Scholar]
- Kavale KA. Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities. 2005;38:553–562. doi: 10.1177/00222194050380061201. [DOI] [PubMed] [Google Scholar]
- Kendler HH. Politics and science: A combustible mixture. American Psychologist. 2004;59:122–123. doi: 10.1037/0003-066X.59.2.122. [DOI] [PubMed] [Google Scholar]
- Kinginger C. Defining the zone of proximal development in US foreign language education. Applied Linguistics. 2002;23:240–261. [Google Scholar]
- Kirkwood MW, Weiler MD, Holms Bernstein J, Forbes PW, Waber DP. Sources of poor performance on the Rey-Osterrieth Complex Figure Test among children with learning difficulties: A dynamic assessment approach. Clinical Neuropsychologist. 2001;15:345–356. doi: 10.1076/clin.15.3.345.10268. [DOI] [PubMed] [Google Scholar]
- Koomen HMY, Verschueren K, Thijs J. Assessing aspects of the teacher-child relationship: A critical ingredient of a practice-oriented psycho-diagnostic approach. Educational and Child Psychology. 2006;23:50–60. [Google Scholar]
- Kovaleski JFS, Glew MC. Bringing instructional support teams to scale: Implications of the Pennsylvania experience. Remedial and Special Education. 2006;27:16–25. [Google Scholar]
- Kozulin A. Learning potential assessment: Where is the paradigm shift? In: Pillemer DB, White SH, editors. Developmental psychology and social change: Research, history and policy. New York: Cambridge University Press; 2005. pp. 352–367. [Google Scholar]
- Kozulin A, Garb E. Dynamic assessment of literacy: English as a third language. European Journal of Psychology of Education. 2004;19:65–77. [Google Scholar]
- Lauchlan F, Elliott JG. The psychological assessment of learning potential. British Journal of Educational Psychology. 2001;71:647–665. doi: 10.1348/000709901158712. [DOI] [PubMed] [Google Scholar]
- Lens W. Theoretical research should be useful and used. International Journal of Psychology. 1987;22:453–461. [Google Scholar]
- Lewin K. Field theory in social science: Selected theoretical papers by Kurt Lewin. London: Tavistock; 1952. [Google Scholar]
- Lidz C. Dynamic assessment: Restructuring the role of school psychologists. NASP Communiqué. 1997;25:22–24. [Google Scholar]
- Lidz C, Elliott J, editors. Dynamic assessment: Prevailing models and applications. Greenwich, CT: Elsevier; 2000. [Google Scholar]
- Lidz C, Elliott J. Use of dynamic assessment with gifted students. Gifted Education International. 2006;21:151–161. [Google Scholar]
- Machek GR, Nelson JM. How should reading disabilities be operationalized? A survey of practicing school psychologists. Learning Disabilities Research & Practice. 2007;22:147–157. [Google Scholar]
- Marsh HW. Sport motivation orientations: Beware of jingle-jangle fallacies. Journal of Sport & Exercise Psychology. 1994;16:365–380. [Google Scholar]
- Marston DB. A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In: Shinn MR, editor. Curriculum-based measurement: Assessing special children. New York: Guilford; 1989. pp. 18–78. [Google Scholar]
- Marston DB. Tiers of intervention in responsiveness to intervention: Prevention outcomes and learning disabilities identification patterns. Journal of Learning Disabilities. 2005;38:539–544. doi: 10.1177/00222194050380061001. [DOI] [PubMed] [Google Scholar]
- Marston DB, Muyskens P, Lau MYY, Canter A. Problem-solving model for decision making with high-incidence disabilities: The Minneapolis experience. Learning Disabilities Research & Practice. 2003;18:187–200. [Google Scholar]
- Mastropieri MA, Scruggs TE. Feasibility and consequences of response to intervention: Examination of the issues and scientific evidence as a model for the identification of individuals with learning disabilities. Journal of Learning Disabilities. 2005;38:525–531. doi: 10.1177/00222194050380060801. [DOI] [PubMed] [Google Scholar]
- Meijer J, Elshout JJ. The predictive and discriminant validity of the zone of proximal development. British Journal of Educational Psychology. 2001;71:93–113. doi: 10.1348/000709901158415. [DOI] [PubMed] [Google Scholar]
- Meijer J, Riemersma F. Teaching and testing mathematical problem solving by offering optional assistance. Instructional Science. 2002;30:187–220. [Google Scholar]
- Miller L, Gillam RB, Peña ED. Dynamic assessment and intervention: Improving children’s narrative abilities. Austin, TX: ProEd Publishing; 2001. [Google Scholar]
- Moore-Brown BJ, Montgomery JK, Bielinski J, Shubin J. Responsiveness to intervention: Teaching before testing helps avoid labeling. Topics in Language Disorders. 2005;25:148–167. [Google Scholar]
- Morris RD, Stuebing KK, Fletcher JM, Shaywitz SE, Lyon GR, Shankweiler DP, et al. Subtypes of reading disability: Variability around a phonological core. Journal of Educational Psychology. 1998;90:347–373. [Google Scholar]
- NCLD. IDEA parent guide. 2006 Retrieved March 24, 2007, from http://www.LD.org.
- No Child Left Behind Act, 20 U.S.C. 70 § 6301 et seq (2002).
- Nowotny H, Scott P, Gibbons M. Rethinking science: Knowledge and the public in an age of uncertainty. Malden, MA: Blackwell; 2001. [Google Scholar]
- O’Connor RE, Fulmer D, Harty KR, Bell KM. Layers of reading intervention in kindergarten through third grade: Changes in teaching and student outcomes. Journal of Learning Disabilities. 2005;38:440–455. doi: 10.1177/00222194050380050701. [DOI] [PubMed] [Google Scholar]
- O’Connor RE, Harty KR, Fulmer D. Tiers of intervention in kindergarten through third grade. Journal of Learning Disabilities. 2005;38:532–538. doi: 10.1177/00222194050380060901. [DOI] [PubMed] [Google Scholar]
- OSEP. Specific learning disabilities: Building consensus for identification and classification. 2001 Retrieved March 24, 2007, from http://www.nrcld.org/resources/ldsummit/index.shtml.
- Peña ED. Measurement of modifiability in children from culturally and linguistically diverse backgrounds. Communication Disorders Quarterly. 2000;21:87–97.
- Peña ED, Iglesias A, Lidz CS. Reducing test bias through dynamic assessment of children’s word learning ability. American Journal of Speech-Language Pathology. 2001;10:138–154.
- Peña ED, Quinn R. Task familiarity: Effects on the test performance of Puerto Rican and African American children. Language, Speech, and Hearing Services in Schools. 1997;28:323–332. doi: 10.1044/0161-1461.2804.323.
- Resing WCM. Learning potential assessment: The alternative for measuring intelligence? Educational and Child Psychology. 1997;14:68–82.
- Rey A. A method for assessing educability. Archives de Psychologie. 1934;53:297–337.
- Roseberry CA, Connell PJ. The use of an invented language rule in the differentiation of normal and language-impaired Spanish-speaking children. Journal of Speech & Hearing Research. 1991;34:596–603. doi: 10.1044/jshr.3403.596.
- Ryba K. Dynamic assessment and programme planning for students with intellectual disabilities. New Zealand Journal of Psychology. 1998;27:3–10.
- Ryba K, Brown M. The teacher as researcher: Studying approaches to teaching and learning with information technology. Computers in New Zealand Schools. 1994;6:3–5.
- Sarason SB. The nature of problem solving in social action. American Psychologist. 1978;33:370–380.
- Scanlon DM, Gelzheiser LM, Vellutino FR, Schatschneider C, Sweeney JM. Reducing the incidence of early reading difficulties: Professional development for classroom teachers versus direct interventions for children. Learning and Individual Differences. 2008;18:346–359. doi: 10.1016/j.lindif.2008.05.002.
- Schlatter C, Büchel FP. Detecting reasoning abilities of persons with moderate mental retardation: The Analogical Reasoning Learning Test (ARLT). In: Lidz C, Elliott JG, editors. Dynamic assessment: Prevailing models and applications. New York: Elsevier; 2000. pp. 155–186.
- Schulte AC. Has dynamic testing lived up to its potential? A review of Sternberg and Grigorenko’s Dynamic Testing. School Psychology Quarterly. 2004;19:88–92.
- Shapiro ES, Solari E, Petscher Y. Use of a measure of reading comprehension to enhance prediction on the state high stakes assessment. Learning and Individual Differences. 2008;18:316–328. doi: 10.1016/j.lindif.2008.03.002.
- Silberglitt B, Hintze JM. How much growth can we expect? A conditional analysis of R-CBM growth rates by level of performance. Exceptional Children. 2007;74:71–84.
- Speece DL, Case LP. Classification in context: An alternative approach to identifying early reading disability. Journal of Educational Psychology. 2001;93:735–749.
- Stanovich KE, Siegel LS. Phenotypic performance profile of children with reading disabilities: A regression-based test of the phonological-core variable-difference model. Journal of Educational Psychology. 1994;86:24–53.
- Staskowski M, Rivera EA. Speech-language pathologists’ involvement in responsiveness to intervention activities: A complement to curriculum-relevant practice. Topics in Language Disorders. 2005;25:132–147.
- Sternberg RJ, Grigorenko EL. All testing is dynamic testing. Issues in Education. 2001;7:137–170.
- Sternberg RJ, Grigorenko EL. Dynamic testing. New York: Cambridge University Press; 2002.
- Sternberg RJ, Grigorenko EL, Ngorosho D, Tantufuye E, Mbise A, Nokes C, et al. Assessing intellectual potential in rural Tanzanian school children. Intelligence. 2002;30:141–162.
- Stone CA. The metaphor of scaffolding: Its utility for the field of learning disabilities. Journal of Learning Disabilities. 1998;31:344–364. doi: 10.1177/002221949803100404.
- Swanson HL, Howard CB. Children with reading disabilities: Does dynamic assessment help in the classification? Learning Disability Quarterly. 2005;28:17–34.
- Swanson HL, Lussier CM. A selective synthesis of the experimental literature on dynamic assessment. Review of Educational Research. 2001;71:321–363.
- Telzrow CF, McNamara K, Hollinger CL. Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review. 2000;29:443–461.
- Thorndike EL. An introduction to the theory of mental and social measurements. New York: John Wiley; 1924.
- Torgesen JK, Alexander AW, Wagner RK, Rashotte CA, Voeller KKS, Conway T. Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities. 2001;34:33–58. doi: 10.1177/002221940103400104.
- Torgesen JK, Morgan ST, Davis C. Effects of two types of phonological awareness training on word learning in kindergarten children. Journal of Educational Psychology. 1992;84:364–370.
- Tymms P, Elliott J. Difficulties and dilemmas in the assessment of special educational needs. Educational and Child Psychology. 2006;23:25–34.
- Tzuriel D, Shamir A. The effects of mediation in computer assisted dynamic assessment. Journal of Computer Assisted Learning. 2002;18:21–32.
- Tzuriel D, Shamir A. The effects of peer mediation with young children (PMYC) on children’s cognitive modifiability. British Journal of Educational Psychology. 2007;77:143–165. doi: 10.1348/000709905X84279.
- Ukrainetz TA, Harpell S, Walsh C, Coyle C. A preliminary investigation of dynamic assessment with Native American kindergartners. Language, Speech, and Hearing Services in Schools. 2000;31:142–154. doi: 10.1044/0161-1461.3102.142.
- van der Aalsvoort GM, Harinck FJH. Scaffolding: Classroom teaching behavior for use with young students with learning disabilities. Journal of Classroom Interaction. 2001;36:29–39.
- VanDerHeyden AM, Witt JC, Barnett DW. The emergence and possible futures of response to intervention. Journal of Psychoeducational Assessment. 2005;23:339–361.
- VanDerHeyden AM, Witt JC, Gilbertson D. A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology. 2007;45:225–256.
- Vaughn S, Fletcher JM, Francis DJ, Denton CA, Wanzek J, Cirino PT, et al. Response to intervention with older students with reading difficulties. Learning and Individual Differences. 2008;18:338–345. doi: 10.1016/j.lindif.2008.05.001.
- Vaughn S, Fuchs LS. Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice. 2003;18:137–146.
- Vaughn S, Linan-Thompson S, Hickman P. Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children. 2003;69:391–409.
- Vellutino FR, Scanlon DM, Lyon GR. Differentiating between difficult-to-remediate and readily remediated poor readers: More evidence against the IQ-achievement discrepancy definition of reading disability. Journal of Learning Disabilities. 2000;33:223–238. doi: 10.1177/002221940003300302.
- Vellutino FR, Scanlon DM, Sipay ER, Small SG, Pratt A, Chen R, et al. Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology. 1996;88:601–638.
- Vellutino FR, Scanlon DM, Tanzman MS. The case for early intervention in diagnosing specific reading disability. Journal of School Psychology. 1998;36:367–397.
- Vygotsky LS. Thought and language. Cambridge, MA: MIT Press; 1962. (Original work published 1934)
- Wallace T, Espin CA, McMaster K, Deno SL, Foegen A. CBM progress monitoring within a standards-based system. Journal of Special Education. 2007;41:66–67.
- Wennergren AC, Ronnerman K. The relation between tools used in action research and the zone of proximal development. Educational Action Research. 2006;14:547–568.
- Wood D, Bruner JS, Ross G. The role of tutoring in problem solving. Journal of Child Psychology & Psychiatry & Allied Disciplines. 1976;17:89–100. doi: 10.1111/j.1469-7610.1976.tb00381.x.
- Zaaiman H, van der Flier H, Thijs GD. Dynamic testing in selection for an educational programme: Assessing South African performance on the Raven Progressive Matrices. International Journal of Selection and Assessment. 2001;9:258–269.