Author manuscript; available in PMC: 2019 Jun 22.
Published in final edited form as: Sci Stud Read. 2018 Jun 22;22(6):462–484. doi: 10.1080/10888438.2018.1481409

Examining the Efficacy of Targeted Component Interventions on Language and Literacy for Third and Fourth Graders Who are at Risk of Comprehension Difficulties

Carol McDonald Connor 1, Beth M Phillips 2, Young–Suk Grace Kim 1, Christopher J Lonigan 2, Michael P Kaschak 2, Elizabeth Crowe 3, Jennifer Dombek 2, Stephanie Al Otaiba 4
PMCID: PMC6434523  NIHMSID: NIHMS1515628  PMID: 30930619

Abstract

Testing a component model of reading comprehension in a randomized controlled trial, we evaluated the efficacy of four different interventions that were designed to target components of language and metacognition that predict children’s reading comprehension: vocabulary, listening comprehension, comprehension of literate language, academic knowledge, and comprehension monitoring. Third- and fourth-graders with language skills falling below age expectations participated (N = 645). Overall, the component interventions were only somewhat effective in improving the targeted skills, compared to a business-as-usual control (g ranged from −.14 to .33), and no main effects were significant after correcting for multiple comparisons. Effects did not generalize to other language skills or to students’ reading comprehension. Moreover, there were child-characteristic-by-treatment interaction effects. For example, the intervention designed to build sensorimotor mental representations was more effective for children with weaker vocabulary skills. Implications for component models of reading and interventions for children at risk of reading comprehension difficulties are discussed.

Keywords: comprehension monitoring, individual differences, instruction, reading comprehension, intervention


Improving reading comprehension skills has proven to be more difficult than anticipated (e.g., James-Burdumy et al., 2009). In an effort to understand why, the Reading for Understanding (RFU) Network was formed to take findings from basic research, translate this research into meaningful interventions to improve children’s reading for understanding, and then evaluate, in randomized controlled trials (RCTs), whether the new interventions were effective in improving reading comprehension. The aims are important because, according to the 2015 National Assessment of Educational Progress (NAEP, https://nces.ed.gov/nationsreportcard/), only 36% of fourth graders in the US are reading at or above proficient levels. Moreover, for minority children (e.g., African American, Hispanic) this rate is much lower – 18–21%. It is also lower, 21%, for children living in poverty (i.e., those who qualify for the National School Lunch Program, NSLP). Of great concern, fully 44% of children eligible for the NSLP are reading below basic levels. The aims of the RFU Network were threefold: (1) to identify the underlying cognitive and linguistic processes that contribute to successful text comprehension and comprehension for the purpose of learning; (2) to rapidly develop appropriate interventions that enhance these processes and determine which are effective for students with varying skill constellations; and (3) to test the efficacy and usability of these interventions. The randomized controlled study presented here was designed to meet these aims by evaluating the efficacy of interventions designed for third and fourth graders. The interventions targeted key linguistic and metacognitive skills that basic research has shown to be associated with proficient reading comprehension (e.g., Hulme, Nash, Gooch, Lervåg, & Snowling, 2015).

Proficient reading comprehension is defined as the ability to “demonstrate a strong understanding of the text…to extend the ideas in the text by making inferences, drawing conclusions, and making connections to [students’] own experiences,” (National Assessment Governing Board, 2007, p. 24). Reading comprehension requires children to fluently decode and understand what they are reading (Rand Study Group & Snow, 2001; Rapp, van den Broek, McMaster, Kendeou, & Espin, 2007; Scarborough, 2001). Basic processes underlying reading comprehension are complex and call on the oral language system and a conscious understanding of this system, i.e., metalinguistic and metacognitive awareness, at all levels from semantic and morpho-syntactic to pragmatic awareness (Morrison, Bachman, & Connor, 2005). Higher order metacognitive skills also predict comprehension (Efklides & Misailidi, 2010; Rapp et al., 2007). Thus accumulating evidence indicates that multiple skills, such as grasp of complex syntax, comprehension monitoring and comprehension strategy use (Baker, 2008), awareness of text structure (Williams, Stafford, Lauer, Hall, & Pollini, 2009), and background knowledge (Rapp et al., 2007) support creating coherent mental representations (or situation models) (Kintsch, 1998; van den Broek, Risden, Fletcher, & Thorlow, 1996) and result in proficient reading for understanding.

The original framework for the design of the interventions reported here was the Florida State University RFU component model of reading comprehension, which was developed to guide this network’s projects. It builds on the Simple View model (Hoover & Gough, 1990; Joshi & Aaron, 2000) and extends other component models (e.g., Mellard & Fall, 2012). In this model, reading for understanding requires fluent decoding and word-level skills, proficient vocabulary, strong syntax/listening comprehension skills, and the ability to bring automatic skills to the conscious level when the reading task demands it--reflective comprehension processes (e.g., metacognition). All of these processes are supported by developing text-specific, linguistic, metalinguistic, and metacognitive processes (e.g., comprehension monitoring) in combination with the instruction children are receiving (Connor, 2016b; Kim, 2017). The theory of action is that if we can strengthen key component skills that contribute to proficient reading for understanding by explicitly teaching these skills through intervention, they can be learned and can become more fluent and automatic; therefore, reading comprehension should also improve. In this study, the interventions focused on improving key component skills that are associated with stronger reading comprehension: text structure knowledge, oral language (including vocabulary, listening comprehension, complex syntax and narrative skills, and academic knowledge), and comprehension monitoring.

The intent of each of these interventions, part of Comprehension Tools for Teachers (CTT), was to provide effective tier 2 instruction designed to improve one or more component skills that predict reading comprehension, within a Multi-tiered System of Support (MTSS, Gersten & Dimino, 2006). In this system, also known as Response to Intervention (RTI), children whose reading skills do not improve as expected, even when receiving effective general education literacy instruction (i.e., tier 1), are provided with more intense and targeted literacy instruction, often referred to as tier 2. Tier 2 instruction is typically provided in smaller flexible learning groups of students with similar learning goals, multiple times per week, for about 20–30 minutes, which was the model used in these CTT interventions. Components considered in the CTT Component Interventions are described below.

Text Structure Knowledge.

Text structure is the organization found in various types of texts (Graesser, Golding, & Long, 1991; Tompkins & McGee, 1989). For example, we know that the topic sentence in a well-crafted paragraph frames the content of the paragraph. Headings help us locate specific information. There is accumulating evidence that when students are explicitly taught to attend to text structure, their comprehension skills improve. Text structure knowledge may aid in text comprehension because it provides schemas to organize and understand new information (Graesser et al., 1991; Stevens, Van Meter, & Warcholak, 2010; Tompkins & McGee, 1989; Williams et al., 2005; Williams et al., 2009). In narratives, important elements of a story (i.e., text structure) typically include main characters, the setting, an initiating event, a problem, attempts to solve the problem, and the resolution (Baker & Stein, 1981; Fitzgerald, 1989). Expository text structure (e.g., informational text) incorporates discourse devices such as “first, next, last” to identify sequences and “if, then” and “because” to identify logical and causal relations. Moreover, because fluent text structure knowledge may reduce cognitive load, one would predict that it should support children’s comprehension skills and the knowledge they gain from the text (i.e., academic knowledge).

Oral language skills (Vocabulary, Syntax, Semantic and Academic Knowledge).

Children’s oral language skills develop over time and increase in sophistication on a fairly predictable timetable, although there are individual differences among children that are a function of biological and environmental influences (Huttenlocher, Haight, Bryk, Seltzer, & Lyons, 1991; Locke, 1997). Whereas humans may learn to read at any age, it is quite likely that the developing linguistic system influences the rate and efficiency of children’s literacy skill development during the pre-K through elementary years (Scarborough, 1998). Reading and oral language are related throughout development (Catts, Fey, Zhang, & Tomblin, 1999; Hulme et al., 2015; Scarborough, 2001). Accumulating data reveal that their relations change as children mature (Kim & Wagner, 2015; Storch & Whitehurst, 2002) and are at times reciprocal (Connor et al., 2016), with experience in each potentially affecting skill acquisition in the other; however, this depends on children’s age and the instruction they receive.

For 3rd and 4th graders, foundational linguistic skills are generally well-developed although there is large variation among students – they are able to carry on relatively sophisticated conversations, monitor their understanding of the conversation (e.g., by asking questions), and have an emerging grasp of metalinguistics (e.g., idioms, puns, Caillies & Le Sourn-Bissaoui, 2008; Cain, Towse, & Knight, 2009). Evidence indicates that successful application of linguistic skills to the process of comprehending texts requires language and metalinguistic skills (Cain, Oakhill, & Bryant, 2004; Cromley & Azevedo, 2007; Kim, 2016, 2017; Kim & Pilcher, 2016). Competency at consciously applying these linguistic and metalinguistic skills to the task of reading comprehension continues to improve as children move from achieving fluent decoding skills to learning from text (Chall, 1996). Those children who are struggling to comprehend or who have weaker language skills may need more explicit instruction in complex language skills, including more advanced vocabulary, listening comprehension, comprehension of literate language, and academic/semantic knowledge (Clarke, Snowling, Trulove, & Hulme, 2010; van der Lely & Marshall, 2010).

Metacognitive skills, or the conscious thinking about thinking (Flavell, 1979), support students’ reading for understanding. A key metacognitive skill explored in this study was comprehension monitoring: recognizing when one does not understand and then employing repair strategies to improve understanding (Baker, 1984; Chen, 2009; Ruffman, 1999), thereby constructing coherent situation models and mental representations of texts (e.g., Cain, Oakhill, & Bryant, 2004; Elliot-Faust & Pressley, 1986; Kim, 2015, 2016, 2017; Kim & Phillips, 2014; Rapp et al., 2007; Skarakis-Doyle, Dempsey, & Lee, 2008). When understanding breaks down, students should know how to employ repair strategies. Another strategy considered here is creating sensorimotor simulations of text, which is associated with stronger reading comprehension for younger children (Glenberg & Kaschak, 2002; Zwaan & Taylor, 2006). Consciously enacting simulations that include motor actions may support the grasp of more semantically and cognitively abstract ideas, such as the opposing forces that underlie arguments or interpersonal conflict (Lakoff & Johnson, 1980).

The Component Interventions

The component interventions included Comprehension Monitoring and Providing Awareness of Story Structure (COMPASS), Language in Motion, and Enacted Reading Comprehension (Enacted RC) in Grade 3; and Enacted RC and Teaching Expository Text Structure (TEXTS) in Grade 4.

COMPASS (Comprehension Monitoring and Providing Awareness of Story Structure).

COMPASS explicitly focuses on three component skills of reading comprehension: vocabulary, comprehension monitoring, and text structure knowledge (Marulis & Neuman, 2010). Each lesson followed a scaffolded I-do, we-do, you-do learning format (Pearson & Gallagher, 1983), and overall the lessons were sequenced such that each component’s instruction increased progressively in difficulty over time.

For comprehension monitoring, students heard a very short story (e.g., 2 sentences) with an accompanying illustration, and were asked whether the story made sense. All the stories were told with illustrations in the first 4 weeks. Prompt illustrations were progressively removed in the second 4 weeks. At the end of each comprehension monitoring portion of the session, children were told that “when they listen to a story, things have to make sense to them, and if they do not, they have to stop and ask questions.” The comprehension monitoring portion of the lesson took approximately 5 minutes (see Kim & Phillips, 2016 for a similar structure but for prekindergartners).

Vocabulary and text structure lessons were taught using researcher-developed narratives, which were written to have a clear text structure in terms of characters, setting, initiating events, problem, and resolution. The stories, which focused on themes of “traditions in different cultures”, included characters to whom participating children could relate in terms of age, gender, and racial backgrounds, and aligned to 3rd grade social studies standards. Each story had 4 to 6 accompanying illustrations that showed the sequence of events. One story was used each week and accompanied by activities including reading aloud, dialogic reading questions, and retell. Key target words from every story were identified and explicitly taught in each lesson (e.g., tradition, miracle). Children were also explicitly taught about text structure – characters, setting, initiating events, problem, efforts to solve the problem, and resolution – using visual aids as well as a song. Retell was first modeled by interventionists, followed by opportunities for children to practice using story illustrations. Students read the text as they listened to the orally presented narrative; they were taught to mark the text (e.g., by circling) where they found target vocabulary words and information relevant to text structure. We anticipated that COMPASS would have an effect on comprehension monitoring, comprehension of literate language, listening comprehension, and vocabulary skills.

Language in Motion.

Language in Motion was designed to support students’ understanding and expressive use of literate language features known to differentiate text from typical oral language (e.g., Benson, 2009; Zipoli Jr., 2017), features which can be difficult for children during both oral and reading comprehension (e.g., Megherbi & Ehrlich, 2005; Oakhill, Cain, & Bryant, 2003). Targeted language elements for third grade included relative clauses, passive sentence structures, and resolution of anaphoric pronoun references; one unit also focused on the figurative language devices of metaphor, simile, and common idioms. In addition, each four-week unit included lessons on targeted mental state vocabulary words, which were also embedded in the texts, focusing on the manner in which different words distinctly represent levels of certainty (i.e., speculate, conclude; e.g., Moore, Pure, & Furrow, 1990; Schwanenflugel, Fabricius, & Noyes, 1996). All Language in Motion lessons were embedded within a narrative adventure story (a haunted house in 3rd grade) and science themes related to basic science concepts (e.g., in 3rd grade, how sound and light travel in waves). Hands-on science activities and two-dimensional story characters and storyboards supported highly interactive, game-like lessons in which new language content was first modeled within oral narratives and activity instructions; students were then challenged to respond correctly and behaviorally to these instructions and to use the new linguistic content in their verbal responses. Lesson scripts provided interventionists with examples of how to scaffold students who found lessons difficult and upward extensions for students who more quickly mastered new content. We hypothesized that Language in Motion would have an effect on vocabulary, listening comprehension, and comprehension of literate language.

Enacted RC.

In this intervention we applied the principles of embodied cognition, which hold that linguistic understanding emerges through the construction of sensorimotor simulations (Glenberg, Gutierrez, Levin, Japunitich, & Kaschak, 2004; Glenberg & Kaschak, 2002; Zwaan & Taylor, 2006). For third and fourth graders, we conjectured that motor actions might support the grasp of more abstract and science-related ideas, such as the opposing forces that underlie earthquakes and weather, and, in narrative, arguments or interpersonal conflict (Lakoff & Johnson, 1980), by helping students consciously use embodied cognition as a strategy for improving understanding of the texts (Connor et al., 2014; Glenberg, Brown, & Levin, 2007; Kaschak, Connor, & Dombek, 2017). In particular, we focused on systematically building students’ understanding of abstract concepts, starting with concrete simulation strategies to understand opposing forces in earthquakes (moving hands together to simulate an earthquake; shaking a toy house full of toy furniture), then moving to more abstract concepts of opposing forces in hurricanes and tornados (waving arms/hands to simulate the circling winds) and in persuasive/argumentative texts (circling one hand for one side of the argument and circling the other hand for the opposing argument). For the final phase of the intervention, students read the novel A Single Shard (Park, 2001). These lessons focused on understanding internal conflicts and moral dilemmas faced by the main characters of the book in the context of opposing forces that could be illustrated with gestures. Lessons were presented in three phases. In Phase 1 (about 3 weeks), the most concrete phase, children read books on earthquakes, hurricanes, and tornados. In Phase 2 (about 2 weeks) children read and wrote persuasive texts focusing on the opposing forces of opinion. In Phase 3 (about 5 weeks) they read A Single Shard (Park, 2001) and discussed how moral dilemmas are like opposing forces and opposing opinions. Each student received their own copy of A Single Shard, which they were allowed to keep. In this last phase children read the book during lessons and at home. We hypothesized that Enacted RC would have an effect on children’s academic knowledge through experience with science texts, on vocabulary, and possibly on their comprehension monitoring, because sensorimotor simulations can be used as a comprehension repair strategy. However, comprehension monitoring was not a direct focus of the Enacted RC intervention.

TEXTS.

The TEXTS intervention was designed to explicitly teach students about expository text structure and to emphasize how certain words can signal, or serve as a “clue,” that an expository text has a particular structure. The predominant text structures that shaped TEXTS are sequencing, or the order of events; cause and effect, or why things occur; compare and contrast, or how things are the same or different; and problem and solution, or how to solve issues (Meyer & Poon, 2001).

Our intent was to have a series of units with different expository text structures to support struggling readers’ comprehension of academic content. TEXTS books were written based upon common topics and state standards for fourth grade (e.g., The Food Chain, The Scientific Method, Pioneer Homes, Making Good Choices). The design team developed a scope and sequence for each book unit, with scripted lesson plans for project interventionists to use with small groups. Each unit lasted a week. On the first day, interventionists spent the first 5 mins introducing signal words and the notion of searching for certain words that provide a clue about the type of text structure (e.g., first, next, last for sequencing). Then students read the book for about 10 mins, and the final 5–10 mins involved application and practice. The “wrap up” was brief, and interventionists asked students to review what they had learned (note this was the final stage for each day of instruction). On the second day, interventionists and students played a signal, or clue word, memory game, guided students through another read of the story, and instructed students how to use a graphic organizer to portray the sequence of the text. On the third day, interventionists led children in review games (such as clue word bingo), and they guided students to make stories that followed the text structure genre of the week. These were brief comic-strip-like stories. On the fourth day, interventionists and students reviewed the clue words, then, to promote transfer, interventionists introduced a story that had the relevant text structure but that had no clue words and encouraged students to think about what words would best signal the text structure.

We hypothesized that TEXTS would improve struggling readers’ listening comprehension and academic knowledge. We conjectured that with children’s improved understanding of expository text structure they would be better able to understand more complex language structures and would gain academic knowledge through reading expository text.

The Current Study

This study was part of a larger study on the efficacy of the CTT component interventions (Connor et al., 2014) from pre-kindergarten through fourth grade (Phillips et al., in preparation; Kim et al., in preparation) with over 8000 children screened and over 3600 children participating in the studies. This study details the third and fourth grade findings. The overarching aim of the current study was to test aspects of the component model of reading for understanding – that improving third and fourth graders’ key component skills would also improve their reading comprehension. We also conjectured that explicit interventions for these component skills would be more effective for children with weaker language and literacy skills, many of whom attend schools serving a higher proportion of children living in poverty. Thus, the following research questions guided this study:

  1. To what extent do the component interventions impact the linguistic and cognitive skills they were designed to improve?

  2. To what extent are there child characteristic X treatment interaction effects? Specifically, are the interventions generally more effective for children with weaker language and decoding skills?

  3. To what extent do the effects of the interventions also improve reading comprehension and other key component skills?

Method

Participants

Overall, the final sample included 338 third- and 307 fourth-grade students attending 33 and 31 schools, respectively. Students in the study came from 135 third-grade classrooms and 115 fourth-grade classrooms. Participants were recruited from schools where at least 40% of the children were eligible for the National School Lunch Program (NSLP). Of the participating students, 52% were girls, 39% were African American, 53% were White, 3% were multi-racial, and the remaining children belonged to other ethnicities. On average, the third graders were 8.8 years of age (SD = .50) and the fourth graders were 9.8 years of age (SD = .50) at the time of the initial screening at the beginning of the school year. A CONSORT flow diagram is provided in Figure 1, which details the procedure for selecting students and the inclusion criterion (i.e., vocabulary SS < 98). Parent consent was obtained and screening completed for 1106 third graders and 1052 fourth graders. Overall attrition was 13%.

Figure 1.

CONSORT Flow diagram of the Comparative Efficacy Study for 3rd and 4th graders. Of them, 401 third graders and 344 fourth graders met the criteria for entrance into the study, which was a vocabulary standard score (mean = 100, SD = 15) falling below 98 on the Expressive One Word Picture Vocabulary Test (Gardner, 1990). These students were then randomized to conditions. Of these, 63 third graders and 37 fourth graders left the study because they moved out of the district after randomization or were otherwise unavailable for post-testing (attrition of 16% and 11% respectively).

Measures

Detailed descriptions of the measures are provided in Table 1. We included multiple measures across three domains: vocabulary, syntax/listening comprehension (Lonigan & Milburn, 2017), and comprehension monitoring. We also included two measures of reading comprehension and measures of word reading. The latter were included to test for child X treatment interaction effects; no intervention specifically targeted fluent word reading. Measures were administered to all students in a quiet place in their school. Group administered assessments were conducted in the students’ classroom.

Table 1.

Measures used in the Comparative Efficacy Study for Third and Fourth Graders.

Name and citation Description Reliability
Vocabulary
Expressive One Word Picture Vocabulary Test, Third Edition (EOWPVT) (Brownell, 2000) The EOWPVT assesses students’ vocabulary through their ability to verbally label, with one word, objects, actions, and concepts when presented with color illustrations. Pictures and words become increasingly unfamiliar. Normed for ages 2–18 years; internal consistency for ages 3–10 is acceptable (Cronbach’s alpha ranges from .93 to .97, and split-half reliability coefficients, corrected for the full length of the test, range from .96 to .98).
Clinical Evaluation of Language Fundamentals-4 Expressive Vocabulary (CELF EV) (Semel, Wiig, & Secord, 2003) This assessment evaluates the student’s vocabulary through their ability to name illustrations of people, objects, and actions, including referential naming. Normed for ages 5–9 years; internal consistency reliability coefficients (alpha) for ages 5–9:11 years range from .80 to .85, and test-retest stability coefficients (corrected) for ages 6–9:11 years range from .87 to .91.
Woodcock Johnson-III Academic Knowledge (WJ AcKnow) (Woodcock, McGrew, & Mather, 2001) Assesses students’ academic knowledge in science, social studies, and the humanities by asking questions with picture support for some items. Students provide verbal responses to the questions. The assessment has a median reliability of .88 in the 5–19 year range.
Syntax/Listening Comprehension
Comprehensive Assessment of Spoken Language - Syntax Construction (CASL S) (Carrow-Woolfolk, 2008). This assessment captures students’ morphosyntactic abilities, asking them to formulate and express sentences using a variety of morphosyntactic rules. Normed for ages 3–12 years; internal consistency using the Rasch split-half method (odd/even) for ages 3–10 years is .73–.87, and test-retest reliability for Syntax Construction (corrected) is .74 for third and fourth graders.
CELF Concepts and Following Directions (CELF CFD) (Semel et al., 2003) assesses listening comprehension by asking students to interpret and follow directions of increasing length and complexity (names, characteristics, and order of mention) using logical operations (e.g., before/after, tallest, etc.). Cronbach’s alpha for ages 5–10:11 years ranges from .73 to .92.
Oral and Written Language Scales (OWLS) - Listening Comprehension scale (Carrow-Woolfolk, 1995) assesses students’ ability to listen to and comprehend spoken language. Items are presented verbally and students point to the one of four pictures that best represents the item. Split-half reliability (alpha) is .96–.98.
Test of Narrative Language Skills (TNL) (Gillam & Pearson, 2004) measures students’ ability to answer literal and inferential comprehension questions about narrative text. Cronbach’s alpha = .74.
Comprehension Monitoring
Inconsistency Detection (InconDetect) (Kim, 2017; Kim & Phillips, 2014). This researcher-developed task is designed to assess students’ comprehension monitoring during listening comprehension. Students listen to short scenarios containing one inconsistent sentence, which the student is challenged to identify. Cronbach’s alpha = .89.
Reading Comprehension Measures
Test of Silent Reading Efficiency and Comprehension (TOSREC) (Wagner, Torgesen, Rashotte, & Pearson, 2010) assesses the efficiency (i.e., speed and accuracy) with which students can read connected text silently for comprehension. Students are instructed: I want you to read some sentences and decide whether the answer is “yes” or “no”. Look at the sample sentence. It says, “A cow is an animal.” Is that true? (Pause for response) Because the answer is “yes”, you should circle “yes” in the box. Alternate-form (immediate administration) reliability coefficients are acceptable and range from .82 to .96 for grades 3–5.
Gates MacGinitie Reading Test (GATES) (MacGinitie & MacGinitie, 2006) is group administered and evaluates students’ abilities to understand extended written text. Students read passages and answer multiple choice questions of increasing complexity. The published reliability for this test is .96.
Word Reading
Woodcock Johnson-III Letter Word Identification (WJ LIWID) (Woodcock et al., 2001) measures students’ ability to read printed letters and printed words correctly by reading lists of increasingly complex and unfamiliar words. The assessment has a median reliability of .91 in the age 5–19 year range.
The Test of Word Reading Efficiency–Second Edition (TOWRE) (Torgesen, Wagner, & Rashotte, 1999) is a measure of an individual’s ability to pronounce printed words (Sight Word Efficiency, SWE) and phonemically regular non-words (Phonemic Decoding Efficiency, PDE) accurately and fluently. These timed tests are normed for persons from 6–24 years of age. Because Cronbach’s alpha and split-half coefficients are considered inappropriate for timed tests, alternate-forms reliability was used; coefficients for students ages 6–10 years ranged from .93 to .97. Further, test-retest reliability for Forms A and B for ages 6–18 years ranged from r = .84 to .97.

Procedures

Assignment to intervention condition.

Because it was not possible to have every intervention-group pair represented in each grade in every school, an incomplete-random-blocks design was used to assign students within schools to intervention conditions. Blocks representing each pairing of intervention conditions were randomized to school and order of use within each grade with the restriction that each block was roughly equally represented across schools. Children who qualified for the study within each grade were ordered by their score on the qualifying measure in groups of five to nine children (i.e., the minimum and maximum number of children that could be formed into two small-group instructional units) and randomized within block to one of the intervention conditions represented by their assignment block. Information about professional development and attendance is provided in the Supplementary Online Materials.
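To make the assignment procedure concrete, the following is a minimal sketch of this within-block randomization. It is written under our own simplifying assumptions: the data layout, the `block_conditions` argument, and the handling of any leftover children are illustrative, not the project's actual code.

```python
import random

def assign_within_block(students, block_conditions, seed=None):
    """Illustrative sketch of the incomplete-random-blocks assignment.

    students: list of (student_id, qualifying_score) tuples for one grade in one
        school. block_conditions: the pair of conditions in the block randomized
        to that school/grade, e.g. ("COMPASS", "BAU"). Names are hypothetical.
    """
    rng = random.Random(seed)
    # Order qualifying children by their score on the qualifying measure.
    ordered = sorted(students, key=lambda s: s[1])
    assignments = {}
    i = 0
    # Work through adjacent groups of five to nine children (enough to form two
    # small instructional groups) and randomize them to the block's conditions.
    while len(ordered) - i >= 5:
        chunk = ordered[i:i + 9]
        rng.shuffle(chunk)
        half = len(chunk) // 2
        for student_id, _ in chunk[:half]:
            assignments[student_id] = block_conditions[0]
        for student_id, _ in chunk[half:]:
            assignments[student_id] = block_conditions[1]
        i += len(chunk)
    return assignments

# Example: ten qualifying children with hypothetical vocabulary screening scores
roster = [(f"s{k}", score) for k, score in enumerate([92, 85, 97, 80, 88, 95, 83, 90, 79, 96])]
print(assign_within_block(roster, ("COMPASS", "BAU"), seed=1))
```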

Fidelity Results.

In general, CTT interventions in these two grades were delivered with strong fidelity (i.e., adherence and quality) with the highest possible rating equal to 5. In 3rd grade, the average ratings for COMPASS were 4.21 (SD = .77) for adherence and 4.41 (SD = .64) for quality. For Language in Motion, average ratings were 4.20 (SD = .62) for adherence and 3.84 (SD = .77) for quality. Likewise, Enacted RC in 3rd grade demonstrated averages of 4.61 (SD = .58) and 4.62 (SD = .53) for adherence and quality, respectively. In 4th grade, Enacted RC’s average ratings were 4.71 (SD =.49) for adherence and 4.71 (SD = .50) for quality. Average ratings for TEXTS were 4.70 (SD = .36) and 4.93 (SD = .20) for adherence and quality, respectively.

Intervention Procedures.

Each component intervention was provided four days per week for 30 minutes. Lessons were implemented in small groups of 4 to 5 students by a trained interventionist who was part of the research team. Each intervention lasted 10 to 12 weeks.

Business as Usual Instruction.

Based on brief observations of classrooms from which children were selected, instruction was of generally good quality and followed the districts’ core literacy curriculum. Cores observed included Treasures, Wonders, Open Court Imagine, and Journeys. These core literacy curricula were approved for use in districts by the State of Florida at the time of the study. Of note, instructional focus in third and fourth grade was on reading comprehension, strategies, discussions about texts, building vocabulary in context, writing, and decoding/encoding. Based on our informal observations and discussions with teachers, as well as a survey we administered in the spring, it is unlikely that teachers provided the intensive focus offered by the CTT component interventions.

Results

Preliminary Analyses

Descriptive statistics for standardized measures at pretest by intervention group are shown in Table S.1 for third-grade children and in Table S.2 for fourth-grade children in the Supplementary Online Materials. Correlations are also provided in Tables S.1 and S.2. As expected, all of the measures were moderately correlated with each other and, notably, the vocabulary, syntax/listening comprehension, and comprehension monitoring (Inconsistency Detection) measures were correlated with both reading comprehension measures although none of the correlations were large.

Differences at Baseline.

Mixed models were used to determine if there were any differences between intervention groups at pretest. For third grade, on both the Sight-Word Efficiency and the Phonological-Decoding Efficiency subtests of the TOWRE, the Language in Motion group scored higher than the Business-As-Usual (BAU) group (p = .02). For fourth grade, there was one statistically significant effect. On the Concepts and Following Directions subtest of the CELF, the TEXTS group scored higher than the BAU group (p < .03).

Overview of Impact Analysis

Our focus was on the two-group contrasts between the three (third grade) or two (fourth grade) experimenter-designed and delivered small-group interventions and the BAU group of children who received their regular classroom-based instruction only. Children were nested within schools and assignment block; therefore, all analyses used mixed models treating these two factors as random effects. For some outcomes, one or both of these random effects had variance estimates at or below zero, indicating that the nesting variable did not account for variance in the outcome. Random effects were removed from the models sequentially so that the final model contained no zero or negative variance estimates. All models included children’s ages, their raw scores on the EOWPVT at pretest, and their scores at pretest on the specific outcome variable in an analysis. The homogeneity of regression assumption of ANCOVA was evaluated by including terms representing the treatment-group-by-covariate interaction for all covariates, and nonsignificant interaction terms were removed from the final model. Follow-up analyses were conducted to determine the size of the two-group comparisons at +/− 1 SD of the covariate for any significant treatment-group-by-covariate interaction. These were modeled results and not sub-group comparisons. In these follow-up analyses, all other covariates and any of the other treatment-group-by-covariate interactions that were statistically significant were retained in the models.
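As an illustration of this analytic approach, the sketch below fits one covariate-adjusted model in Python with statsmodels. It simplifies the reported specification, using a single random intercept for school rather than crossed school and block effects, and all column and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child with posttest and pretest
# scores, intervention condition, age, school, and assignment block.
df = pd.read_csv("third_grade_outcomes.csv")

# Covariate-adjusted model with treatment contrasts against the BAU group.
# The paper modeled school and assignment block as random effects; for
# simplicity this sketch includes only a random intercept for school.
base = smf.mixedlm(
    "celf_ev_post ~ C(condition, Treatment('BAU')) + age + eowpvt_pre + celf_ev_pre",
    data=df,
    groups=df["school"],
).fit(reml=True)
print(base.summary())

# Homogeneity-of-regression check: add condition-by-covariate interactions;
# nonsignificant interaction terms would be dropped from the final model.
moderated = smf.mixedlm(
    "celf_ev_post ~ C(condition, Treatment('BAU')) * eowpvt_pre + age + celf_ev_pre",
    data=df,
    groups=df["school"],
).fit(reml=True)

# For a significant interaction, the follow-up contrasts are evaluated at
# +/- 1 SD of the covariate (model-implied values, not subgroup means), e.g.
# by re-centering eowpvt_pre at (mean - 1 SD) and refitting.
```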

We conducted multiple comparisons using a common control group; consequently, the danger of Type I error (finding an effect by chance) was inflated. At the same time, we were examining whether relatively brief, albeit intensive, interventions might have a positive effect on fairly insensitive standardized measures; thus, we also faced the danger of Type II error (failing to identify a significant effect). We mitigated both types of error to the extent possible. Following the standards of the What Works Clearinghouse (n.d.), we applied the Benjamini-Hochberg (BH) correction (Benjamini & Hochberg, 1995) to outcomes within a domain (i.e., vocabulary [EOWPVT, CELF-EV, WJ AcKnow], syntax/listening comprehension [CASL-S, OWLS, CELF-CFD, TNL], reading comprehension [Gates, TOSREC], decoding [WJ LWID, TOWRE-SWE, TOWRE-PDE]) (Lonigan, Burgess, & Anthony, 2000; Lonigan & Milburn, 2017). The BH correction controls the false-discovery rate instead of familywise error; therefore, it is less conservative than the Bonferroni method, limiting Type II error while providing adequate protection against Type I error. We computed BH corrected significance levels at both the conventional level of statistical significance (i.e., p < .05) and to detect marginal effects (i.e., p < .10). In the tables, we report the actual p-values for all contrasts with p-values of p < .10; however, only those that achieved significance following BH correction are marked as significant (*) or marginally significant (~). Finally, we considered whether the effect sizes were educationally meaningful following Hill et al. (2008), where educationally meaningful effect sizes for third and fourth graders range from 0.24 to 0.48 with a mean of 0.36.
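For reference, the Benjamini-Hochberg step-up procedure is straightforward to apply; a minimal sketch follows. The p-values shown are placeholders, not values from this study, and the same procedure is also available via statsmodels' multipletests with method='fdr_bh'.

```python
import numpy as np

def bh_significant(pvals, q=0.05):
    """Benjamini-Hochberg: flag p-values significant at false-discovery rate q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k / m) * q; that p-value and all
    # smaller ones are declared significant.
    passes = ranked <= (np.arange(1, m + 1) / m) * q
    flags = np.zeros(m, dtype=bool)
    if passes.any():
        k = np.max(np.nonzero(passes)[0])
        flags[order[:k + 1]] = True
    return flags

# Placeholder p-values for the contrasts within one outcome domain
print(bh_significant([0.012, 0.049, 0.20], q=0.05))
```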

Impacts on Outcomes for Third-Grade Children

Overall contrasts.

Posttest scores, adjusted for covariates in the final model, and effect sizes (Hedges g) for the two-group comparisons for third-grade children are shown in Table 2. Of note, once the BH correction was applied, none of the contrasts were significant. However, some of the effects can be considered educationally meaningful. For example, for COMPASS, there was a treatment effect (g) of .31 on the Inconsistency Detection Task (i.e., comprehension monitoring), as hypothesized. For Enacted RC, there was an educationally meaningful effect of treatment (.33) on the EOWPVT (i.e., vocabulary), with children who received the intervention scoring higher than children in the BAU condition.

Table 2.

Descriptive statistics for posttest scores and effect sizes for two-group comparisons for third-grade children.

| Outcome | BAU Adj. M (SD) | COMPASS Adj. M (SD) | ERC Adj. M (SD) | LIM Adj. M (SD) | COMPASS vs BAU g (p) | ERC vs BAU g (p) | LIM vs BAU g (p) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EOWPVT | 84.67 (10.51) | 86.26 (12.62) | 88.22 (11.18) | 86.35 (11.13) | .14 | .33 (.012) | .15 |
| CELF EV | 38.14 (6.49) | 38.66 (6.12) | 38.98 (5.78) | 38.29 (5.93) | .08 | .14 | .02 |
| CASL S | 27.74 (5.82) | 28.46 (6.14) | 28.71 (5.45) | 28.21 (4.80) | .12 | .17 | .09 |
| CELF CFD | 41.25 (6.77) | 42.19 (6.32) | 41.78 (6.47) | 40.29 (6.74) | .14 | .08 | −.14 |
| OWLS | 84.72 (8.36) | 85.09 (10.22) | 85.53 (9.46) | 85.49 (10.09) | .04 | .09 | .08 |
| TNL | 29.73 (3.81) | 30.25 (4.31) | 30.11 (4.02) | 29.88 (3.74) | .13 | .10 | .04 |
| WJ AcKnow | 15.47 (1.51) | 15.53 (1.53) | 15.69 (1.57) | 15.69 (1.24) | .04 | .14 | .16 |
| InconDetect | 15.36 (2.88) | 16.27 (2.90) | 15.56 (3.08) | 14.83 (3.11) | .31 (.065) | .07 | −.18 |
| TOSREC | 22.10 (6.09) | 22.37 (6.46) | 22.35 (6.21) | 22.33 (6.86) | .04 | .04 | .04 |
| GATES | 26.93 (8.44) | 26.57 (9.22) | 26.14 (8.46) | 26.15 (7.81) | −.04 | −.09 | −.10 |
| WJ LIWID | 48.35 (5.78) | 47.73 (6.38) | 48.08 (6.12) | 47.95 (7.79) | −.10 | −.05 | −.06 |
| TOWRE SWE | 61.08 (10.57) | 61.43 (11.20) | 60.55 (10.91) | 62.83 (8.60) | .03 | −.05 | .18 |
| TOWRE PDE | 29.78 (11.01) | 28.92 (11.50) | 29.88 (11.96) | 29.52 (10.56) | −.08 | .01 | −.02 |

Note. No significant effects after correcting for multiple comparisons. Effect sizes in bold are for constructs that were hypothesized to be impacted by the intervention. BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and LIM is Language in Motion.
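The effect sizes in Table 2 and the subsequent tables are Hedges' g: the covariate-adjusted mean difference divided by the pooled standard deviation, multiplied by a small-sample correction. A sketch using the ERC vs. BAU EOWPVT values from Table 2 (the group sizes of 85 per group are illustrative assumptions, not the study's exact cell sizes):

```python
import math

def hedges_g(m1, m0, sd1, sd0, n1, n0):
    """Hedges' g: Cohen's d from the pooled SD, times the small-sample correction J."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n0 - 1) * sd0**2) / (n1 + n0 - 2))
    d = (m1 - m0) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n0) - 9)
    return d * j

# Adjusted ERC vs. BAU means and SDs for the EOWPVT from Table 2; group sizes
# of 85 are hypothetical. The result is approximately .33, matching the table.
print(round(hedges_g(88.22, 84.67, 11.18, 10.51, 85, 85), 2))
```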

Follow-up contrasts: Language outcomes.

There were intervention-condition-by-covariate interactions, indicating that the effect of the intervention was not consistent among students based on their incoming skills. Many of the potentially educationally meaningful effects were not significant after the BH correction (see Table 3 and Figure 2). Highlighting those effects that were educationally meaningful and significant, we found that, for example, the effect of Enacted RC on vocabulary (the EOWPVT) was stronger for children who had lower EOWPVT scores prior to the start of the intervention than for children who had higher EOWPVT scores prior to the start of the intervention; the effect size for children with low vocabulary scores was large (.68).

Table 3.

Differential impacts of intervention conditions by level of moderator (in parentheses) for language-related outcomes for third-grade children.

| Outcome (Moderator) | Value of Moderator | BAU Adj. M | COMPASS Adj. M | ERC Adj. M | LIM Adj. M | COMPASS vs BAU g (p) | ERC vs BAU g (p) | LIM vs BAU g (p) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EOWPVT (EOWPVT) | −1 SD | 77.50 | 77.63 | 84.85 | 78.20 | .01 | .68* (.0009) | .06 |
| EOWPVT (EOWPVT) | M | 84.67 | 86.27 | 88.22 | 86.36 | .15 | .33 (.012) | .16 |
| EOWPVT (EOWPVT) | +1 SD | 91.84 | 94.91 | 91.59 | 94.52 | .28 | −.02 | .25 |
| CELF EV (EOWPVT) | −1 SD | 35.34 | 37.34 | 37.49 | 35.60 | .33 (.047) | .35 (.049) | .04 |
| CELF EV (EOWPVT) | M | 38.13 | 38.65 | 38.98 | 38.28 | .08 | .14 | .02 |
| CELF EV (EOWPVT) | +1 SD | 40.91 | 39.97 | 40.47 | 40.96 | −.15 | −.07 | .01 |
| CELF CFD (EOWPVT) | −1 SD | 39.19 | 39.43 | 41.06 | 39.94 | .04 | .28 | .11 |
| CELF CFD (EOWPVT) | M | 41.24 | 42.17 | 41.78 | 40.29 | .14 | .08 | −.14 |
| CELF CFD (EOWPVT) | +1 SD | 43.29 | 44.92 | 42.49 | 40.64 | .25 | −.12 | −.39* (.0002) |
| CELF CFD (CELF CFD) | −1 SD | 36.53 | 41.56 | 39.14 | 36.73 | .76* (.0002) | .39 (.054) | .03 |
| CELF CFD (CELF CFD) | M | 41.25 | 42.19 | 41.78 | 40.29 | .14 | .08 | −.14 |
| CELF CFD (CELF CFD) | +1 SD | 45.97 | 42.81 | 44.42 | 43.85 | −.48 (.015) | −.23 | −.31 (.065) |
| TNL (Age) | −1 SD | 29.42 | 31.60 | 30.75 | 30.12 | .56* (.006) | .34 (.076) | .19 |
| TNL (Age) | M | 29.73 | 30.23 | 30.10 | 29.88 | .13 | .09 | .04 |
| TNL (Age) | +1 SD | 30.04 | 28.86 | 29.46 | 29.65 | −.30 | −.15 | .10 |
* p-value significant after Benjamini-Hochberg correction (Benjamini & Hochberg, 1995); for the ERC × EOWPVT effect at the mean, the BH critical value was equal to the p-value.

Effect sizes in bold are for constructs that were hypothesized to be impacted by the intervention.

BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and LIM is Language in Motion.

Figure 2.

Third grade child X instruction interaction posttest effect sizes (g) relative to the business as usual control (BAU) for COMPASS, ERC, and LIM. The moderator is the pre-test, indicated in parentheses, falling one standard deviation below the mean (below), at the mean (mean), or one standard deviation above the mean (above). CELF EV and EOWPVT are vocabulary assessments, CELF CFD and TNL are measures of listening comprehension. Age is in years. ERC is Enacted Reading Comprehension, and LIM is Language in Motion.

For the Concepts and Following Directions subtest of the CELF (i.e., simple listening comprehension), there was a large effect (.76) of COMPASS for children who had lower scores on this measure prior to the start of the intervention, whereas there was no impact for children with average or higher scores on this measure prior to the start of the intervention. There was also a significant negative effect of Language in Motion on the Concepts and Following Directions subtest of the CELF for children who had higher scores on the measure and on vocabulary prior to the start of the intervention. Finally, there was a significant positive and educationally meaningful effect of COMPASS on the Test of Narrative Language (i.e., a listening comprehension task that requires deep comprehension) for the youngest children, but there was no effect for children who were of average age or older.

Follow-up contrasts: Word reading outcomes.

There was a significant positive effect of Language in Motion on the Sight-Word Efficiency subtest of the TOWRE for children who scored lower on the measure prior to the intervention (see Table 4), but no impacts for children who scored average or higher on the measure prior to the intervention. Additionally, there was a significant negative effect of Enacted RC on the Sight-Word Efficiency subtest of the TOWRE for children who scored lower on the EOWPVT (vocabulary) prior to the intervention, but no impacts for children who scored at the average level or higher on the EOWPVT prior to the intervention.

Table 4.

Differential impacts of intervention conditions by level of moderator (in parentheses) for code-related outcomes for third-grade children

| Outcome (Moderator) | Value of Moderator | BAU Adj. M | COMPASS Adj. M | ERC Adj. M | LIM Adj. M | COMPASS vs BAU g | ERC vs BAU g | LIM vs BAU g |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOWRE PDE (TOWRE PDE) | −1 SD | 21.10 | 19.05 | 21.74 | 22.71 | −.18 | .06 | .15 |
| TOWRE PDE (TOWRE PDE) | M | 29.78 | 28.92 | 29.88 | 29.52 | −.07 | .01 | −.02 |
| TOWRE PDE (TOWRE PDE) | +1 SD | 38.47 | 38.80 | 38.02 | 36.34 | .03 | −.04 | −.20 |
| TOWRE SWE (EOWPVT) | −1 SD | 61.68 | 61.84 | 58.17 | 63.90 | .01 | −.33 | .23 |
| TOWRE SWE (EOWPVT) | M | 61.08 | 61.43 | 60.55 | 62.84 | .03 | −.05 | .18 |
| TOWRE SWE (EOWPVT) | +1 SD | 60.48 | 61.01 | 62.92 | 61.77 | .05 | .23 | .13 |
| TOWRE SWE (TOWRE SWE) | −1 SD | 52.58 | 54.24 | 51.94 | 57.05 | .15 | −.06 | .47* |
| TOWRE SWE (TOWRE SWE) | M | 61.08 | 61.43 | 60.56 | 62.84 | .03 | −.05 | .18 |
| TOWRE SWE (TOWRE SWE) | +1 SD | 69.59 | 68.62 | 69.17 | 68.62 | −.09 | −.04 | −.10 |
* p-value significant after Benjamini-Hochberg correction.

BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and LIM is Language in Motion.

Impacts on Outcomes for Fourth-Grade Children

Overall contrasts.

For fourth graders, for Enacted RC and TEXTS (see Table 5), the largest effect size was 0.25, which might be considered educationally meaningful for this age range. However, after the BH correction, it was not statistically significant.

Table 5.

Descriptive statistics for posttest scores and effect sizes for two-group comparisons for fourth-grade children

| Outcome | BAU Adj. M (SD) | ERC Adj. M (SD) | TEXTS Adj. M (SD) | ERC vs BAU g (p) | TEXTS vs BAU g (p) |
| --- | --- | --- | --- | --- | --- |
| EOWPVT | 91.44 (10.82) | 92.45 (11.44) | 93.25 (11.44) | .09 | .16 |
| CELF EV | 40.45 (5.68) | 41.00 (5.95) | 40.84 (5.70) | .09 | .07 |
| CASL S | 31.76 (5.32) | 31.00 (4.75) | 32.29 (4.38) | −.15 | .11 |
| CELF CFD | 44.03 (6.28) | 44.17 (5.21) | 44.52 (4.63) | .02 | .09 |
| OWLS | 88.74 (10.13) | 89.16 (8.65) | 91.20 (9.50) | .04 | .25 (.049) |
| InconDetect | 15.94 (2.54) | 15.71 (2.77) | 15.92 (2.70) | −.09 | −.01 |
| TNL | 31.01 (3.63) | 30.70 (3.29) | 31.15 (3.38) | −.09 | .04 |
| WJ AcKnow | 15.99 (1.87) | 16.28 (1.42) | 16.33 (1.58) | .17 | .20 (.067) |
| TOSREC | 28.08 (7.66) | 28.38 (7.92) | 27.56 (6.80) | .04 | −.07 |
| GATES | 25.17 (8.54) | 24.48 (8.86) | 24.53 (7.86) | −.08 | −.08 |
| WJ LIWID | 51.85 (6.73) | 51.80 (5.72) | 51.40 (5.93) | −.01 | −.07 |
| TOWRE SWE | 65.23 (12.00) | 66.99 (9.63) | 65.45 (9.85) | .16 | .02 |
| TOWRE PDE | 32.89 (11.19) | 32.28 (11.34) | 31.46 (10.60) | −.05 | −.13 |

Note. No significant effects after applying the Benjamini-Hochberg correction. BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and TEXTS is Teaching Expository Text Structures.

Follow-up contrasts: Language outcomes.

There were also intervention-by-covariate interactions for fourth graders. Again, many of the potentially educationally meaningful effect sizes were not significant after the BH correction. We highlight here those that were. There were significant and educationally meaningful effects for both Enacted RC and TEXTS on Academic Knowledge for students who had lower scores on the measure prior to the start of the intervention. There was no significant effect for students with average scores on the measure prior to the start of the TEXTS and Enacted RC interventions. There was a significant negative effect of Enacted RC for students with strong academic knowledge skills at the beginning of the intervention.

Follow-up contrasts: Word reading outcomes.

There was an educationally meaningful effect of Enacted RC on the Sight-Word Efficiency subtest of the TOWRE for older children but no effects for children of average age or younger for their grade (see Table 7). In contrast, on Letter-Word Identification, there was a positive effect of Enacted RC for children with weaker vocabulary skills (EOWPVT) prior to the intervention and a negative effect for children with stronger vocabulary skills.

Table 7.

Differential impacts of intervention conditions by level of moderator (in parentheses) for code-related outcomes for fourth-grade children

| Outcome (Moderator) | Value of Moderator | BAU Adj. M | ERC Adj. M | TEXTS Adj. M | ERC vs BAU g | TEXTS vs BAU g |
| --- | --- | --- | --- | --- | --- | --- |
| WJ LWID (Age) | −1 SD | 52.60 | 51.80 | 52.33 | −.13 | −.04 |
| WJ LWID (Age) | M | 51.85 | 51.80 | 51.40 | −.01 | −.07 |
| WJ LWID (Age) | +1 SD | 51.09 | 51.79 | 50.46 | .11 | −.10 |
| WJ LWID (EOWPVT) | −1 SD | 51.23 | 52.50 | 51.47 | .20* | .04 |
| WJ LWID (EOWPVT) | M | 51.85 | 51.80 | 51.40 | −.01 | −.07 |
| WJ LWID (EOWPVT) | +1 SD | 52.47 | 51.10 | 51.32 | −.22* | −.18+ |
| TOWRE SWE (EOWPVT) | −1 SD | 64.23 | 67.07 | 66.84 | .26+ | .24+ |
| TOWRE SWE (EOWPVT) | M | 65.24 | 66.99 | 65.44 | .16 | .02 |
| TOWRE SWE (EOWPVT) | +1 SD | 66.26 | 66.91 | 64.04 | .06 | −.20 |
| TOWRE SWE (Age) | −1 SD | 67.41 | 66.36 | 66.14 | −.10 | −.11 |
| TOWRE SWE (Age) | M | 65.24 | 66.99 | 65.46 | .16 | .02 |
| TOWRE SWE (Age) | +1 SD | 63.07 | 67.61 | 64.78 | .41*** | .15 |
*** p < .001; * p < .05; + p < .10.

BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and TEXTS is Teaching Expository Text Structures.

Discussion

This study was guided by three research questions. For the first question, to what extent do the Comprehension Tools for Teachers component interventions impact the linguistic and metacognitive skills they were designed to improve, we found that, for the most part, our relatively brief (10–12 weeks) but intensive (4 days/week) component interventions were generally not effective for many students in improving targeted skills assessed using standardized measures (see Figure 4 for a graphical representation of the findings). Although there were a few scattered educationally meaningful effect sizes (Hill, Bloom, Black, & Lipsey, 2008), there were no main effects for any of the interventions once a BH correction was applied. Instead, the targeted interventions were effective in improving the hypothesized skills only for students with generally weaker incoming skills.

Figure 4.

Graphical presentation of results for 3rd and 4th grade combined. Solid lines are educationally meaningful (g > 0.24) but not significant effects, long dash lines are interaction effects (i.e., the intervention worked for some children), and light dotted lines indicate the effect was not significantly different from 0. Coefficients represent main effect sizes (g) for either 3rd or 4th grade. ERC is Enacted Reading Comprehension, and LIM is Language in Motion.

This finding emphasizes the importance of our second question: To what extent are there child characteristic X treatment interaction effects? Acknowledging that there may be other reasons for the moderating effects besides the CTT interventions, the answer to this question proved to be complex but promising (see Figures 3 and 4). Keeping in mind that all of the students in the study scored below mean age/grade expectations on the vocabulary assessment (standard score < 98), the child X instruction interactions generally followed our hypotheses (see Figure 4). For example, as anticipated, COMPASS students, compared to the BAU students, had stronger listening comprehension, but this was the case only for students with weaker initial skills in third grade. In addition, COMPASS predicted stronger comprehension of literate language for younger children in third grade (i.e., 1 SD below the mean age) with no effect on older third graders. We wonder whether younger third graders are still developing comprehension of literate language and thus key elements of the COMPASS lessons supported this development. In contrast, these skills may have already developed for older children, and so COMPASS had no effect on comprehension of literate language for older third graders. We saw a similar trend for Enacted RC, with positive effects for younger students and null to negative effects for older students through 4th grade. Enacted RC was also more effective in improving vocabulary for students with weaker vocabulary skills compared to students with stronger vocabulary skills. Patterns of child X instruction interactions for fourth grade followed predicted patterns--TEXTS and Enacted RC were more effective in improving academic knowledge for students with weaker initial skills but not for students with typical and stronger skills. TEXTS was also effective in improving listening comprehension (CELF) for students with weaker initial skills but not for students with stronger skills.

Figure 3.

Fourth grade child X instruction interaction posttest effect sizes (g) relative to the business as usual control (BAU) for ERC and TEXTS. The moderator is the pre-test, indicated in parentheses, falling one standard deviation below the mean (below), at the mean (mean), or one standard deviation above the mean (above). AcKnow is a measure of academic knowledge; CELF is a measure of listening comprehension. ERC is Enacted Reading Comprehension, and TEXTS is Teaching Expository Text Structures.

With regard to word reading, there were unexpected child X instruction effects of the Language in Motion and Enacted RC interventions on sight word reading efficiency. Language in Motion (3rd grade only) had a positive treatment effect on sight word reading efficiency for children with weaker word reading skills. For older fourth graders participating in Enacted RC, there was a positive effect on sight word reading efficiency compared to the BAU group. We conjecture that this may have been related to the amount of reading required of the children. For example, in Enacted RC children read the entire chapter book, A Single Shard (Park, 2001).

These consistent findings of child X instruction interaction effects in the moderation analyses, even for a somewhat homogeneous group (i.e., vocabulary < 98 standard score), are notable. If we consider only the children with more severely delayed language skills (i.e., more than 1 SD below the sample mean), the pattern of findings remains similar--the component interventions were generally effective in improving targeted skills for students with weaker skills. At the same time, some of the interventions had negative effects for children with stronger skills--that is, we might assume that they would have been better off staying in the classroom rather than participating in the intervention. The causes of these unexpected results are unclear, but it appears that the targeted skills were not matched to the students’ performance levels for the given outcomes. Our understanding of child X instruction interactions and how to personalize (or individualize or differentiate) the instruction provided is improving but still has a way to go (Connor & Morrison, 2016).

Ultimately, the component interventions were designed to improve reading comprehension and other key component skills (our third research question). Unfortunately, the answer to this question was null--none of the interventions had an effect on either measure of reading comprehension, and any educationally meaningful treatment effects observed were generally confined to hypothesized targeted skills. Overall, the effects on key linguistic and metacognitive components of comprehension did not immediately generalize to stronger reading comprehension skills. Thus, taken with our findings for our first research question, we found that any learning was highly specific to what was taught, was evident only for some children, and did not generally transfer to other skills, including reading comprehension. Limited generalization to other skills is consistent with evidence from other areas of research showing that training on specific skills often does not result in broader impacts on performance (e.g., “brain training” games improve specific game-related skills but do not provide general benefits to cognitive performance; Simons et al., 2016). Specificity of learning is an important concept with a long history in psychology (e.g., Woodworth & Thorndike, 1901), yet one we are likely to overlook as we design and test interventions and theories.

Results for the research questions together also suggest that there is limited support for component intervention models of reading comprehension, at least with our current intervention design (e.g., 10–12 weeks of intensive instruction on specific components). Whereas the evidence is overwhelming that the targeted skills are associated with reading comprehension (Cain & Oakhill, 2007; Hulme & Snowling, 2011; Nation, Clarke, Marshall, & Durand, 2004) and all of the measures in this study were positively correlated, it was not the case that improving the component skills in a targeted and specific way improved reading comprehension. New theoretical models are needed, and it was these findings, among others, that led us to develop the lattice model of reading comprehension (Connor, 2016a; Connor et al., 2014), to which we will return.

These findings also may help to explain some of the more disappointing findings regarding multi-tiered systems of support (MTSS), which attempt to address individual student differences through a tiered system, with benchmarks set for when students receive more intensive tier 2 and tier 3 interventions. Again, we consider the interventions presented here to be tier 2 interventions, where tier 1 is general classroom instruction. A large national study, using regression discontinuity, revealed null and negative effects of placement in tier 2 interventions for students who just made or just missed their schools' benchmark for placement in tier 2 (Balu et al., 2015). In reviewing the participating schools' benchmarks, we noted that many schools set their benchmark at the 40th percentile on standardized assessments, which is comparable to our benchmark for inclusion in this study. Yet we found that the interventions were generally more effective for students with much weaker skills (closer to the 20th percentile). More careful consideration of MTSS benchmarks is warranted, as is thinking about how to help general education teachers personalize (differentiate, individualize) both tier 1 classroom instruction and tier 2 interventions, so that we can better serve children who bring diverse constellations of skills and backgrounds to the classroom and preserve intervention resources for the children who need them most.
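As a rough point of reference for these benchmarks, the short calculation below converts percentiles to standard scores under the usual assumption of a normal distribution with a mean of 100 and a standard deviation of 15; it is purely illustrative and is not taken from the study or from Balu et al.

```python
# Illustrative only: convert percentile benchmarks to standard scores,
# assuming normally distributed standard scores with mean 100 and SD 15.
from scipy.stats import norm

for pct in (0.40, 0.20):
    standard_score = 100 + 15 * norm.ppf(pct)
    print(f"{int(pct * 100)}th percentile ~ standard score {standard_score:.0f}")
```

On these assumptions, the 40th percentile corresponds to a standard score of roughly 96, close to the vocabulary cutoff of 98 used for inclusion here, whereas the 20th percentile corresponds to roughly 87, closer to the skill levels at which the interventions appeared most effective.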

Limitations

There are several limitations that should be considered when interpreting these results. First, we conducted many analyses; although we applied a Benjamini-Hochberg correction, some of the observed effects may still have occurred by chance. The CTT component interventions were more likely to be effective for the component skills they targeted, particularly for students with weaker incoming skills. For example, COMPASS, with its emphasis on comprehension monitoring and text structure via story retell, had educationally meaningful effects on inconsistency detection and interaction effects on listening comprehension, vocabulary, and comprehension of literate language relative to the BAU group. A second limitation is that, with one exception, no researcher-developed proximal measures were used in this study because the evaluation protocol was already lengthy; the decision was made when developing the study to use only well-validated standardized assessments. To be included in this study, each of the interventions had to demonstrate strong effects on proximal measures (Al Otaiba, Connor, & Crowe, in press; Connor et al., 2014; Kaschak et al., 2017; Kim & Phillips, 2014; Phillips et al., 2016). Finally, the reading comprehension measures and some of the language measures we selected may not have been sensitive to the language structures we targeted. As can be seen in Tables S.1 and S.2, none of the correlations between the outcome measures were moderate or large. We saw similar correlations with the TOSREC.
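For readers unfamiliar with the correction mentioned above, the following is a minimal sketch of the Benjamini-Hochberg (1995) false discovery rate procedure. The p-values here are made up for illustration; they are not the study's results.

```python
# A minimal sketch of the Benjamini-Hochberg (1995) procedure; illustrative p-values only.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ranks p-values from smallest to largest
    ranked = p[order]
    thresholds = alpha * np.arange(1, m + 1) / m   # BH critical values: (rank / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest rank meeting its critical value
        reject[order[: k + 1]] = True              # reject all hypotheses up to that rank
    return reject

example_p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(example_p))               # only the smallest p-values survive
```

Because the procedure controls the expected proportion of false discoveries rather than the familywise error rate, some true effects may fail to survive the correction when many outcomes are tested, which is one reason the limitation above cuts both ways.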

Support for our original component model of reading was limited because, even when the intervention improved component skills, this did not generalize to improved reading comprehension. Improved skills also did not generalize to other language and metacognitive processes. However, other aspects of the model were supported—explicitly teaching language and metacognitive processes to students appeared to be somewhat effective in improving the targeted component skills for those students with weaker skills.

Further challenging the component model, in other studies conducted by Reading for Understanding researchers, oral language appears to comprise two highly correlated factors--a semantic or vocabulary factor and a syntax/listening comprehension (or higher order) factor (Language and Reading Research Consortium, 2015; Lonigan & Milburn, 2017). Taking the findings of this study and the language-factor studies together, we conjecture that a more complete model of reading comprehension should consider reciprocal effects among text-specific, linguistic, social, and cognitive factors that interact with instruction (i.e., child X instruction interactions) to affect children's reading for understanding (i.e., reading comprehension). Improving reading comprehension, in turn, influences children's developing text-specific, linguistic, social, and cognitive skills. There is emerging support for this model (Connor, 2016a; Connor et al., 2016; Connor et al., 2014). If one envisions this lattice of processes, instruction, and developing skills as a taut net, then improving only one component skill would be like plucking just one node of the net--the node would spring back to align with the rest of the net. Perhaps improving multiple nodes of the net simultaneously (e.g., through multiple cognitive and linguistic interventions) might prevent this snapping back and, instead, improve the entire network of developing skills and processes. That is, combining effective component interventions might be more effective than providing more focused single-component interventions, particularly for children with weaker language and literacy skills.

In summary, whereas the overall results of the study revealed more null than positive impacts of the CTT interventions on language and metacognitive skills related to reading comprehension, these results have implications for theory, future research, and practice. As has been found before, improving reading comprehension continues to prove more difficult than anticipated. Still, these short but intensive CTT tier 2 interventions were effective in improving some of their targeted skills, for some children, as assessed by fairly insensitive standardized measures. One might argue that improving language and metacognitive skills, even if there is no direct effect on reading comprehension, is a worthwhile endeavor--particularly because most third and fourth graders, including those with weaker oral language skills, typically do not receive specific language and metacognitive interventions. Notably, another study (Clarke, Snowling, Truelove, & Hulme, 2010) did find that improving components of oral language positively impacted reading comprehension. The age range in that study was similar to ours, but the criteria for entry were different--students were selected if they had fairly strong decoding skills but weak reading comprehension skills. Perhaps had we selected students using reading comprehension as well as vocabulary in our criteria, our results might have aligned with Clarke et al. Moreover, Clarke and colleagues' effective interventions included multiple components of oral language, including vocabulary, figurative language, and spoken narrative, which again suggests that multi-component interventions may be more effective than more targeted interventions addressing only a few components. We are now testing this hypothesis in studies that combine component interventions for pre-kindergarten and kindergarten students, and for second graders. Finally, it is clear that new and more complex theories of the acquisition of reading comprehension, theories that include instruction and intervention, are needed. Although component models are highly appealing, the results of this and other studies suggest that they are missing important active ingredients and wrongly assume that gains in one skill will transfer to other related skills. We have proposed the lattice model, which has emerging support. However, more research, particularly intervention research to develop and test new models, is needed if we are to help more students proficiently read for understanding.


Table 6.

Differential impacts of intervention conditions by level of moderator (in parentheses) for language-related outcomes for fourth-grade children

BAU, ERC, and TEXTS columns show adjusted means for each group; the final two columns show effect sizes.

Outcome (Moderator)     Moderator level   BAU     ERC     TEXTS    ERC vs BAU       TEXTS vs BAU
CELF CFD (CELF CFD)     −1 SD             41.63   42.83   43.81     .21              .40 (.030)
                        M                 44.03   44.17   44.52     .02              .09
                        +1 SD             46.43   45.52   45.24    −.16             −.22
WJ AcKnow (WJ AcKnow)   −1 SD             14.78   15.94   15.62     .69* (.00002)    .48* (.003)
                        M                 16.00   16.28   16.33     .17              .19 (.067)
                        +1 SD             17.21   16.63   17.05    −.35* (.029)     −.09
TNL (Age)               −1 SD             32.25   30.83   31.80    −.41 (.029)      −.13
                        M                 31.02   30.70   31.16    −.09              .04
                        +1 SD             29.80   30.58   30.52     .22              .20
TOSREC (EOWPVT)         −1 SD             26.43   27.71   28.12     .16              .23
                        M                 28.05   28.37   27.76     .04             −.04
                        +1 SD             29.68   29.03   27.41    −.08             −.31 (.043)
* p-value significant after applying the Benjamini-Hochberg correction. Actual p-values, where p < .10, are displayed in parentheses.

Effect sizes in bold are for constructs that were hypothesized to be impacted by the intervention.

BAU is the Business as Usual control, ERC is Enacted Reading Comprehension, and TEXTS is Teaching Expository Text Structures.

Acknowledgments

We would like to thank the entire RFU and ISI Project teams, as well as the children, parents, teachers, and school administrators without whom this research would not have been possible. This study was primarily funded by the Reading for Understanding Network grant R305F100027 from the U.S. Department of Education, Institute of Education Sciences, and in part by R305N160050 and R305A170163, and in part by grants R01HD48539 and P50HD52150 from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The opinions expressed are ours and do not represent the views of the funding agencies.

References

1. Al Otaiba S, Connor CM, & Crowe EC (in press). Promise and feasibility of teaching expository text structure: A primary grade pilot study. Reading and Writing.
2. Baker L (1984). Children’s effective use of multiple standards for evaluating their comprehension. Journal of Educational Psychology, 76, 588–597.
3. Baker L (2008). Metacognition in comprehension instruction: What we’ve learned since NRP. In Comprehension instruction: Research-based best practices (2nd ed., pp. 65–79). New York, NY: Guilford Press.
4. Baker L, & Stein N (1981). The development of prose comprehension skills. In Santa C & Hayes B (Eds.), Children’s prose comprehension: Research and practice (pp. 7–43). Newark, DE: International Reading Association.
5. Balu R, Zhu P, Doolittle F, Schiller EP, Jenkins JR, & Gersten R (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC. Retrieved from http://ies.ed.gov/ncee/pubs/20164000/pdf/20164000.pdf
6. Benjamini Y, & Hochberg Y (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1), 289–300.
7. Benson SE (2009). Understanding literate language: Developmental and clinical issues. Contemporary Issues in Communication Science and Disorders, 36, 174–178.
8. Brownell R (2000). Expressive One-Word Picture Vocabulary Test (3rd ed.). Novato, CA: Academic Therapy Publications.
9. Caillies S, & Le Sourn-Bissaoui S (2008). Children’s understanding of idioms and theory of mind development. Developmental Science, 11(5), 703–711. doi: 10.1111/j.1467-7687.2008.00720.x
10. Cain K, & Oakhill J (2007). Reading comprehension difficulties: Correlates, causes, and consequences. In Cain K & Oakhill J (Eds.), Children’s comprehension problems in oral and written language: A cognitive perspective. New York: Guilford Press.
11. Cain K, Oakhill J, & Bryant P (2004). Children’s reading comprehension ability: Concurrent prediction by working memory, verbal ability, and component skills. Journal of Educational Psychology, 96(1), 31–42.
12. Cain K, Towse AS, & Knight RS (2009). The development of idiom comprehension: An investigation of semantic and contextual processing skills. Journal of Experimental Child Psychology, 102, 280–298. doi: 10.1016/j.jecp.2008.08.001
13. Carrow-Woolfolk E (1995). Oral and Written Language Scales. American Guidance Services.
14. Carrow-Woolfolk E (2008). Comprehensive Assessment of Spoken Language. Los Angeles, CA: Western Psychological Services.
15. Catts HW, Fey M, Zhang X, & Tomblin B (1999). Language basis of reading and reading disabilities: Evidence from a longitudinal investigation. Scientific Studies of Reading, 3(4), 331–361.
16. Chall JS (1996). Stages of reading development (2nd ed.). Orlando, FL: Harcourt Brace.
17. Chen Q-S (2009). Metacomprehension monitoring and regulation in reading comprehension. Acta Psychologica Sinica, 41(8), 676–683. doi: 10.3724/SP.J.1041.2009.00676
18. Clarke PJ, Snowling MJ, Truelove E, & Hulme C (2010). Ameliorating children’s reading-comprehension difficulties. Psychological Science, 21(8), 1106–1116. doi: 10.1177/0956797610375449
19. Connor CM (2016a). A lattice model of the development of reading comprehension. Child Development Perspectives, 10(4), 269–274. doi: 10.1111/cdep.12200
20. Connor CM (Ed.) (2016b). The cognitive development of reading and reading comprehension. London: Routledge.
21. Connor CM, Day SL, Phillips B, Sparapani N, Ingebrand SW, McLean L, … Kaschak MP (2016). Reciprocal effects of self-regulation, semantic knowledge, and reading comprehension in early elementary school. Child Development, 87(6), 1813–1824. doi: 10.1111/cdev.12570
22. Connor CM, & Morrison FJ (2016). Individualizing student instruction in reading: Implications for policy and practice. Policy Insights from the Behavioral and Brain Sciences, 3(1), 54–61. doi: 10.1177/2372732215624931
23. Connor CM, Phillips B, Kaschak M, Apel K, Kim Y-S, Al Otaiba S, … Lonigan C (2014). Comprehension tools for teachers: Reading for understanding from prekindergarten through fourth grade. Educational Psychology Review, 26(3), 379–401. doi: 10.1007/s10648-014-9267-1
24. Efklides A, & Misailidi P (Eds.). (2010). Trends and prospects in metacognitive research. New York: Springer.
25. Elliot-Faust DJ, & Pressley M (1986). How to teach comparison processing to increase children’s short- and long-term listening comprehension monitoring. Journal of Educational Psychology, 78, 27–33.
26. Fitzgerald J (1989). Research on stories: Implications for teachers. In Muth KD (Ed.), Children’s comprehension of text: Research into practice (pp. 2–36). Newark, DE: International Reading Association.
27. Flavell JH (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911. doi: 10.1037/0003-066X.34.10.906
28. Gardner MF (1990). Expressive One-Word Picture Vocabulary Test (Rev. ed.). Novato, CA: Academic Therapy Publications.
29. Gersten R, & Dimino JA (2006). RTI (response to intervention): Rethinking special education for students with reading difficulties (yet again). Reading Research Quarterly, 41(1), 99–108.
30. Gillam RB, & Pearson NA (2004). Test of Narrative Language. Austin, TX: Pro-Ed.
31. Glenberg AM, Brown M, & Levin JR (2007). Enhancing comprehension in small reading groups using a manipulation strategy. Contemporary Educational Psychology, 32, 389–399.
32. Glenberg AM, Gutierrez T, Levin JR, Japuntich S, & Kaschak MP (2004). Activity and imagined activity can enhance young children’s reading comprehension. Journal of Educational Psychology, 96(3), 424–436.
33. Glenberg AM, & Kaschak MP (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565.
34. Graesser A, Golding JM, & Long DL (1991). Narrative representation and comprehension. In Barr R, Kamil ML, Mosenthal P, & Pearson PD (Eds.), Handbook of reading research (Vol. II, pp. 171–205). White Plains, NY: Longman.
35. Hill C, Bloom H, Black AR, & Lipsey MW (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. doi: 10.1111/j.1750-8606.2008.00061.x
36. Hoover WA, & Gough PB (1990). The simple view of reading. Reading and Writing, 2(2), 127–160.
37. Hulme C, Nash HM, Gooch D, Lervåg A, & Snowling MJ (2015). The foundations of literacy development in children at familial risk of dyslexia. Psychological Science, 26(12), 1877–1886. doi: 10.1177/0956797615603702
38. Hulme C, & Snowling MJ (2011). Children’s reading comprehension difficulties: Nature, causes, and treatments. Current Directions in Psychological Science, 20(3), 139–142.
39. Huttenlocher J, Haight W, Bryk A, Seltzer M, & Lyons T (1991). Early vocabulary growth: Relation to language input and gender. Developmental Psychology, 27(2), 236–248.
40. James-Burdumy S, Mansfield W, Deke J, Carey N, Lugo-Gil J, Hershey A, … Faddis B (2009). Effectiveness of selected supplemental reading comprehension interventions: Impacts on a first cohort of fifth-grade students (NCEE 2009-4032). Retrieved from http://search.proquest.com/docview/61882547?accountid=4840
41. Joshi RM, & Aaron PG (2000). The component model of reading: Simple view of reading made a little more complex. Reading Psychology, 21(2), 85–97. doi: 10.1080/02702710050084428
42. Kaschak MP, Connor CM, & Dombek JL (2017). Enacted reading comprehension: Using bodily movement to aid the comprehension of abstract text content. PLoS ONE, 12(1), e0169711. doi: 10.1371/journal.pone.0169711
43. Kim Y-S (2015). Language and cognitive predictors of text comprehension: Evidence from multivariate analysis. Child Development, 86, 128–144. doi: 10.1111/cdev.12293
44. Kim Y-S (2016). Direct and mediated effects of language and cognitive skills on comprehension of oral narrative texts (listening comprehension) for children. Journal of Experimental Child Psychology, 141, 101–120. doi: 10.1016/j.jecp.2015.08.003
45. Kim Y-S (2017). Why the simple view of reading is not simplistic: Unpacking the simple view of reading using a direct and indirect effect model of reading (DIER). Scientific Studies of Reading, 21, 310–333. doi: 10.1080/10888438.2017.1291643
46. Kim Y-S, & Phillips B (2014). Cognitive correlates of listening comprehension. Reading Research Quarterly, 49, 269–281. doi: 10.1002/rrq.74
47. Kim Y-S, & Phillips B (2016). 5 minutes a day: An exploratory study of improving comprehension monitoring for prekindergartners from low income families. Topics in Language Disorders, 36, 356–367.
48. Kintsch W (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
49. Lakoff G, & Johnson M (1980). Metaphors we live by. Chicago: University of Chicago Press.
50. Language and Reading Research Consortium. (2015). The dimensionality of language in young children. Child Development, 86(6), 1948–1965. doi: 10.1111/cdev.12450
51. Locke JL (1997). A theory of neurolinguistic development. Brain and Language, 58, 265–326.
52. Lonigan CJ, Burgess SR, & Anthony JL (2000). Development of emergent literacy and early reading skills in preschool children: Evidence from a latent variable longitudinal study. Developmental Psychology, 36, 596–613.
53. Lonigan CJ, & Milburn TF (2017). Dimensionality of oral language skills.
54. MacGinitie WH, & MacGinitie RK (2006). Gates-MacGinitie Reading Tests (4th ed.). Iowa City, IA: Houghton Mifflin.
55. Marulis LM, & Neuman SB (2010). The effects of vocabulary intervention on young children’s word learning: A meta-analysis. Review of Educational Research, 80(3), 300–335. doi: 10.3102/0034654310377087
56. Megherbi H, & Ehrlich MF (2005). Language impairment in less skilled comprehenders: The on-line processing of anaphoric pronouns in a listening situation. Reading and Writing, 18(7), 715–753.
57. Mellard DF, & Fall E (2012). Component model of reading comprehension for adult education participants. Learning Disability Quarterly, 35(1), 10–23. doi: 10.1177/0731948711429197
58. Meyer BJF, & Poon LW (2001). Effects of structure strategy training and signaling on recall of text. Journal of Educational Psychology, 93, 141–159.
59. Morrison FJ, Bachman HJ, & Connor CM (2005). Improving literacy in America: Guidelines from research. New Haven, CT: Yale University Press.
60. Nation K, Clarke P, Marshall CM, & Durand M (2004). Hidden language impairments in children: Parallels between poor reading comprehension and specific language impairments? Journal of Speech, Language, and Hearing Research, 47, 199–211.
61. National Assessment Governing Board. (2007). Reading framework for the 2007 National Assessment of Educational Progress. Washington, DC: U.S. Department of Education.
62. Oakhill J, Cain K, & Bryant PE (2003). The dissociation of word reading and text comprehension: Evidence from component skills. Language and Cognitive Processes, 18, 443–468.
63. Park LS (2001). A single shard. New York: Houghton Mifflin Harcourt.
64. Pearson PD, & Gallagher MC (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8, 317–344.
65. Phillips BM, Tabulda G, Ingrole SA, Burris PW, Sedgwick TK, & Chen S (2016). Literate language intervention with high-need prekindergarten children: A randomized trial. Journal of Speech, Language, and Hearing Research, 59(6), 1409–1420. doi: 10.1044/2016_JSLHR-L-15-0155
66. RAND Study Group, & Snow CE (2001). Reading for understanding. Santa Monica, CA: RAND Education and the Science and Technology Policy Institute.
67. Rapp DN, van den Broek P, McMaster K, Kendeou P, & Espin CA (2007). Higher-order comprehension processes in struggling readers: A perspective for research and intervention. Scientific Studies of Reading, 11(4), 289–312. doi: 10.1080/10888430701530417
68. Ruffman T (1999). Children’s understanding of logical inconsistency. Child Development, 70, 872–886.
69. Scarborough HS (1998). Early identification of children at risk for reading disabilities: Phonological awareness and some other promising predictors. In Shapiro BK, Accardo PJ, & Capute AJ (Eds.), Specific reading disability: A view of the spectrum (pp. 75–119). Timonium, MD: York Press.
70. Scarborough HS (2001). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In Neuman SB & Dickinson DK (Eds.), Handbook of early literacy research (pp. 97–110). New York: Guilford Press.
71. Semel E, Wiig EH, & Secord WA (2003). Clinical Evaluation of Language Fundamentals (CELF) (4th ed.). San Antonio, TX: PsychCorp.
72. Simons DJ, Boot WR, Charness N, Gathercole SE, Chabris CF, Hambrick DZ, & Stine-Morrow EAL (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17, 103–186.
73. Skarakis-Doyle E, Dempsey L, & Lee C (2008). Identification of language comprehension impairments in preschool children. Language, Speech, and Hearing Services in Schools, 39, 54–65.
74. Stevens RJ, Van Meter P, & Warcholak ND (2010). The effects of explicitly teaching story structure to primary grade children. Journal of Literacy Research, 42, 159–198.
75. Tompkins GE, & McGee LM (1989). Teaching repetition as a story structure. In Muth KD (Ed.), Children’s comprehension of text: Research into practice (pp. 59–78). Newark, DE: International Reading Association.
76. Torgesen JK, Wagner RK, & Rashotte CA (1999). Test of Word Reading Efficiency (TOWRE). Austin, TX: Pro-Ed.
77. van den Broek P, Risden K, Fletcher CR, & Thurlow R (1996). A “landscape view” of reading: Fluctuating patterns of activation and the construction of a stable memory representation. In Britton BK & Graesser AC (Eds.), Models of understanding text (pp. 165–187). Mahwah, NJ: Lawrence Erlbaum Associates.
78. Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (2010). TOSREC: Test of Silent Reading Efficiency and Comprehension. Austin, TX: Pro-Ed.
79. What Works Clearinghouse. (n.d.). WWC procedures and standards handbook (Version 4). Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_standards_handbook_v4.pdf
80. Williams JP, Hall KM, Lauer KD, Stafford B, DeSisto LA, & deCani J (2005). Expository text comprehension in the primary grade classroom. Journal of Educational Psychology, 97(4), 538–550.
81. Williams JP, Stafford KB, Lauer KD, Hall KM, & Pollini S (2009). Embedding reading comprehension training in content-area instruction. Journal of Educational Psychology, 101(1), 1–20. doi: 10.1037/a0013152
82. Woodcock RW, McGrew KS, & Mather N (2001). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside.
83. Woodworth RS, & Thorndike EL (1901). The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review, 8, 247–261.
84. Zipoli RP Jr. (2017). Unraveling difficult sentences: Strategies to support reading comprehension. Intervention in School and Clinic, 52, 218–227.
85. Zwaan RA, & Taylor LJ (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135(1), 1–11.
