Author manuscript; available in PMC: 2022 Apr 12.
Published in final edited form as: Learn Disabil Q. 2021 Feb 15;44(3):183–196. doi: 10.1177/0731948721989973

Examining the Reading and Cognitive Profiles of Students With Significant Reading Comprehension Difficulties

Philip Capin 1, Eunsoo Cho 2, Jeremy Miciak 3, Greg Roberts 1, Sharon Vaughn 1
PMCID: PMC9004597  NIHMSID: NIHMS1744458  PMID: 35418724

Abstract

This study investigated the word reading and listening comprehension difficulties of fourth-grade students with significant reading comprehension deficits and the cognitive difficulties that underlie these weaknesses. Latent profile analysis was used to classify a sample of fourth-grade students (n = 446) who scored below the 16th percentile on a measure of reading comprehension into subgroups based on their performance in word reading (WR) and listening comprehension (LC). Three latent profiles emerged: (a) moderate deficits in both WR and LC of similar severity (91%), (b) severe deficit in WR paired with moderate LC deficit (5%), and (c) severe deficit in LC with moderate WR difficulties (4%). Analyses examining the associations between cognitive attributes and group membership indicated students with lower performance on cognitive predictors were more likely to be in a severe subgroup. Implications for educators targeting improved reading performance for upper elementary students with significant reading difficulties were discussed.

Keywords: word reading, listening comprehension, reading comprehension deficits, cognitive difficulties, fourth graders


Reading is an essential skill for success in school and life. Thus, the persistent finding that many students in late elementary are not able to read and understand grade-level text represents a critical educational challenge (Biancarosa & Snow, 2006; National Assessment of Educational Progress, 2019). Despite widespread recognition of the scope and importance of this challenge, research on instructional practices and interventions to improve reading comprehension for students with reading difficulties in late elementary and beyond has failed to yield robust findings (for reviews, see Scammacca et al., 2015; Wanzek et al., 2010), in contrast to those observed in intervention research implemented with struggling readers in early elementary grades (for reviews, see Wanzek et al., 2016, 2018). Of particular concern to researchers and educators of students with the most significant reading problems are findings which suggest these students are the least responsive to intensive interventions (Fletcher et al., 2018). Vaughn and colleagues (2019), for instance, found that initial word reading was a critical predictor of reading intervention response for upper elementary struggling readers and that students with the most substantial word reading problems made the smallest gains in reading comprehension. This finding suggests that information about students’ initial levels of reading performance and specific areas of weakness (i.e., reading profiles) may have the potential to inform reading interventions decisions for students in the upper elementary.

Various theoretical models have been proposed to explain the sources of reading comprehension difficulties (e.g., Cromley & Azevedo, 2007; Perfetti, 1999). Despite differences in emphasis and structure, theoretical models of reading recognize the essential importance of accurate word reading and understanding linguistic input. This fundamental understanding of the reading process is summarized in an influential and well-validated theoretical model called the Simple View of Reading (SVR; Gough & Tunmer, 1986). The SVR posits that reading comprehension is the product of two interrelated but distinct component skills: accurate word reading and listening comprehension. Elegant in its parsimony, the SVR has proven a robust predictor of variance in individual reading comprehension across different ages (Catts et al., 2005; Kendeou et al., 2009; Tilstra et al., 2009), measures (e.g., Catts et al., 2015; Language and Reading Research Consortium & Chiu, 2018), clinical populations (Roch & Levorato, 2009), and languages (Florit & Cain, 2011; Kim, 2011). In initially describing the SVR, Gough and Tunmer (1986) posited that reading difficulties result from an “inability to decode, an inability to comprehend, or both” (p. 7) and that the SVR could be used to identify subgroups of struggling readers.
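Stated formally (following Gough & Tunmer, 1986, with each component scaled from 0 to 1), the SVR expresses reading comprehension as the product of its two components, so that a severe deficit in either component constrains comprehension regardless of strength in the other:

```latex
% Simple View of Reading (Gough & Tunmer, 1986)
% RC = reading comprehension, D = decoding (word recognition),
% LC = linguistic (listening) comprehension
RC = D \times LC, \qquad 0 \le D \le 1, \quad 0 \le LC \le 1
```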

In the present study, we attempt to explore the SVR component skills in a novel and rigorous way to better understand individual differences among students with significant reading difficulties, including dyslexia. First, we identify profiles based on performance on SVR variables (word reading and listening comprehension) among late elementary students with significant reading comprehension deficits. This age represents a critical transition period, and we are unaware of previous studies that have attempted this. Our goal was to determine to what extent separable profiles may be identified, as suggested by Gough and Tunmer (1986), and what relative skill profiles mark these latent groups. Second, and assuming that latent profiles emerge based upon differences in relative skill levels on the SVR variables, we investigated what cognitive process theoretically related to reading comprehension might predict membership in these classes. These study aims will inform future intervention development, as struggling readers with different skill profiles may optimally benefit from interventions that differ in their relative focus on constituent reading skills.

Subgroups of Struggling Readers

Few studies have used rigorous statistical methods such as latent class or profile analysis to investigate the reading profiles of struggling readers in late elementary and beyond based on performance on component reading skills highlighted within the SVR. Such analyses may contribute to the design and selection of interventions, as intervention protocols may be tailored to address both the severity and specificity of deficits identified in distinct latent profiles. If struggling readers are primarily differentiated by the severity of their reading deficits (see Vellutino et al., 2004, 2007), intervention protocols may be best adjusted along dimensions of instructional intensity, with greater dosage for students with more severe reading deficits. In contrast, when profiles vary based on specificity (i.e., when profiles indicate specific weaknesses in one of the SVR component skills of reading, such as listening comprehension or word reading), intervention protocols may be best adjusted by identifying instructional foci that correspond to the specific component skill weakness (Johnson et al., 2010; Swanson, 1999). It is important to draw a distinction here between adjusting instructional foci based on measures of aptitude (e.g., working memory [WM], long-term memory) and specific skill profiles in reading (e.g., word reading). Researchers have long clamored to uncover aptitude-by-treatment interactions (see, for example, Cronbach & Snow, 1977), yet this line of research has yielded only limited evidence that interventions are differentially effective for students based on measures of aptitude (e.g., working and long-term memory; e.g., Burns et al., 2016). However, there is some empirical evidence to suggest that adjusting instructional foci based on component reading skills leads to improved outcomes (e.g., Burns et al., 2018; Connor et al., 2004; McMaster et al., 2012; Szadokierski et al., 2017).

Previous research suggests the vast majority of struggling readers in the primary grades (K–3) demonstrate difficulties in both SVR components (e.g., Hoover & Gough, 1990). However, there is some research to suggest that older readers may differ in the specificity of their reading difficulties. Some researchers have suggested that there is a substantial proportion of struggling readers who demonstrate underperformance in reading comprehension despite adequate word reading skills. Although the prevalence of this profile seems to vary based on the methods and measures for classifying students (Keenan & Meenan, 2014), some have estimated that as many as 10% to 15% of all students demonstrate specific comprehension deficits (Aaron et al., 2008; Leach et al., 2003; Nation & Snowling, 1997; Stothard & Hulme, 1995; Torppa et al., 2007; Yuill & Oakhill, 1991).

A few studies have examined the specificity and severity of reading difficulties among struggling comprehenders in Grades 6 through 9 using rigorous statistical approaches that empirically identify latent subgroups within a sample (Brasseur-Hock et al., 2011; Clemens et al., 2017; Lesaux & Kieffer, 2010). These studies suggest that latent groups can be identified based on both the severity and specificity of component skill deficits. For example, Clemens et al. (2017) investigated the latent class structure within a sample of 180 middle school students with significant reading comprehension difficulties. The goal was to empirically identify the proportion of students with specific deficits in passage reading fluency and vocabulary or deficits in both areas. Four classes were identified. Two classes were identified based on the severity of their reading deficits: average vocabulary/average fluency (n = 8, 4%) versus low vocabulary/low fluency (n = 103, 57%). The remaining two classes were identified based on the specificity of their reading deficits: average vocabulary/low fluency (n = 41, 23%) versus low vocabulary/average fluency (n = 28, 16%). Results indicated that 96% of students exhibited a deficit in either reading fluency or vocabulary, with 80% of students demonstrating below-average fluency. In addition, most students (61%) did not exhibit a specific deficit, but rather displayed underachievement on both component skills (reading fluency and vocabulary).

In a similar study, Brasseur-Hock et al. (2011) conducted a latent class analysis with 195 ninth-grade students with below-average comprehension. Eight measures of component reading skills, including vocabulary, listening comprehension, word and text-level reading accuracy and fluency were included to identify latent classes. Consistent with Clemens et al. (2017), Brasseur-Hock and colleagues were able to identify distinct classes of struggling readers marked by both the severity and specificity of their deficits. A five-class solution included two classes marked primarily by the severity of their component skill deficits: (a) students with severe global weaknesses (n = 28, 14%) and (b) students with moderate global weaknesses (n = 71, 38%). In addition, three other groups marked by specific component skill deficits were identified: (c) students with specific language comprehension deficits (n = 21, 11%), (d) students with specific oral reading fluency deficits (n = 57, 31%), and (e) poor comprehenders with average component skills (n = 18, 10%). Like those reported by Clemens et al., the findings of Brasseur-Hock et al. suggest that the majority (52%) of struggling comprehenders are distinguished by deficits in both fluency and linguistic comprehension and are classified based on differences in the severity of their reading deficits. However, of the students identified with the most significant reading deficits (i.e., those with scores at the lowest level on the Kansas Reading Achievement test), 85% were identified with moderate or global weaknesses across the component skills of reading.

In contrast, Lesaux and Kieffer (2010) identified a larger proportion of struggling comprehenders with specific reading skill deficits in a sample that included a large number of language minority (LM) learners (n = 201) and a smaller number of native English speakers (n = 61) in Grade 6. The overall sample of struggling readers examined in this study also demonstrated, on average, less severe reading comprehension deficits than those studied by Clemens et al. (2017) and Brasseur-Hock et al. (2011). Latent class analysis was conducted using performance on decoding, oral reading fluency, and vocabulary measures. Three classes were identified reflecting both the severity and specificity of deficits: (a) slow word callers (above-average decoding, below-average vocabulary and fluency skills; n = 158, 60%), (b) globally impaired readers (deficits in all skill domains; n = 56, 21%), and (c) automatic word callers (average decoding and fluency, low vocabulary; n = 48, 18%). Similar to Clemens et al., more than 80% of the students had below-average reading fluency skills; however, 78% of students had average decoding skills. Perhaps reflecting the substantial percentage of LM participants, all three subgroups were characterized by low vocabulary knowledge.

Taken together, these studies suggest that latent subgroups of students with poor reading comprehension can be identified based on performance on SVR variables and other component reading skills. Class formation is most often differentiated by the severity of reading deficits and, less frequently, by the specificity of those deficits (Brasseur-Hock et al., 2011; Clemens et al., 2017). However, there is some disagreement among previous studies on the relative percentage of students who are classified as having specific reading deficits. Notably, no previous study utilizing latent profile analysis (LPA) has focused on students in the upper elementary grades, for whom relationships between component reading skills might be differentially predictive of reading comprehension because of ongoing transitions in reading tasks at this age.

Cognitive Predictors of Reading and Reading Comprehension

Previous studies that have evaluated latent classes of students with reading comprehension deficits have not evaluated whether these profiles differ on external dimensions, such as performance on common cognitive tasks associated with reading. Evaluation of performance on external dimensions provides empirical support for the underlying classification hypothesis (Miciak et al., 2016; Morris & Fletcher, 1988) and points toward potential intervention targets. We include cognitive variables implicated in the reading process and previously evaluated in studies investigating cognitive profiles of struggling readers (e.g., Cho et al., 2015; Fletcher et al., 2011; Miciak et al., 2014). These skills include phonological processing, rapid naming, verbal knowledge (VK), WM, nonverbal processing, and executive functioning (EF; planning). Cognitive interventions that do not involve print or numbers, such as WM training (Melby-Lervåg et al., 2016), have not been effective in improving academic performance in reading or math. Nevertheless, there is a growing body of research examining the efficacy of interventions that embed cognitive supports, such as WM supports (e.g., Fuchs et al., 2018) and self-regulation training (e.g., Vaughn et al., 2016), within the context of reading and math instruction. Research findings that identify the cognitive correlates associated with subgroups of struggling readers may have the potential to improve individual risk identification and to inform the development of more effective interventions.

Research Questions and Hypotheses

No previous study has utilized rigorous statistical methods to determine the reading profiles of late elementary students with significant reading comprehension problems according to performance on important component reading skills (e.g., word reading and listening comprehension). Such an investigation is warranted because late elementary students with differences in the severity and specificity of their reading deficits may require intervention protocols that differ in intensity and focus. This represents the primary purpose of this article. In addition, we aim to evaluate whether subgroups vary in their reading comprehension performance and to what extent membership within latent classes can be predicted based on performance on important cognitive predictors of key reading skills—an important extension of previous latent class analyses with older students that would provide empirical support for the validity of the groups. Three research questions and hypotheses guide the study.

Research Question 1

Research Question 1 (RQ1): Do reading profiles emerge based on the severity and specificity of the component skills of reading?

Based on previous research incorporating latent class analyses with older readers with similar comprehension problems (Brasseur-Hock et al., 2011; Clemens et al., 2017; Lesaux & Kieffer, 2010), we hypothesize that multiple latent subgroups will be identified based on both the severity and specificity of their reading skills deficits. Based on previous research suggesting that struggling readers are more similar to primary grade readers insofar as they experience both significant word reading and listening comprehension difficulties (e.g., Cho et al., 2019), we expect that most poor comprehenders in Grade 4 will demonstrate significant and similar levels of underperformance in both word reading and listening comprehension. Thus, we expect that most students will be placed in latent subgroups based on the severity of their component skill deficits, rather than the presence of specific skill deficits. To the extent that groups are formed based on specific component skills deficits, we anticipate membership in these groups to be smaller.

Research Question 2

Research Question 2 (RQ2): Do these reading profiles vary in their reading comprehension performance?

We hypothesize performance on reading comprehension measures will be associated with the severity of groups’ component skill difficulties.

Research Question 3

Research Question 3 (RQ3): What related cognitive processes predict group membership?

Based on previous evaluations of the cognitive profiles of struggling readers in elementary and beyond (e.g., Cho et al., 2015; Miciak et al., 2014), we hypothesize that groups with specific component reading skill deficits will exhibit corresponding deficits in theoretically and empirically implicated cognitive processes. For example, groups formed based on specific deficits in word-level reading will demonstrate corresponding deficits in phonological processing whereas groups with specific listening comprehension deficits will demonstrate corresponding deficits in VK and to a lesser extent in corresponding executive control processes (e.g., WM, EF). Differences in severity will be marked by corresponding differences in severity across cognitive predictors.

Method

This secondary analysis used pre-intervention data from a multisite randomized controlled trial for students with severe reading difficulties in Grade 4 (Vaughn et al., 2016). Participants from 17 schools in the Southwestern United States—nine schools were from an urban school district and eight schools were from two near-urban school districts—were included in the study. These schools were selected to reflect the demographic diversity of the broader region. The mean enrollment for the participating schools was 697 students. Each school included a significant proportion of students who qualified for free lunch (M = 81.6%, range = 46.1%–98.4%).

Student Participants

The research team screened fourth-grade students at all participating schools using the Gates-MacGinitie Reading Comprehension subtest (GMRT-RC; MacGinitie et al., 2000) unless students were enrolled in an alternative curriculum (i.e., life skills class) or identified as having a significant sensory disability that interfered with participating in the study. Of the students screened (n = 1,695), a total of 484 whose performance was at or below a standard score of 85 on the GMRT-RC met the study inclusion criterion. We were unable to include 38 students in this secondary analysis based on our preliminary analyses of missing data and outliers (see the subsection “Preliminary analysis” for further details). Thus, the final sample included 446 students. Table 1 presents the demographic information for the sample analyzed in the present study.

Table 1.

Participant Demographics.

Demographics Frequency Percent
Gender
 Female 200 44.84
 Male 246 55.16
Limited English proficiency
 No 214 47.98
 Yes 229 51.35
 Missing  3   0.67
Special education status
 No 186 41.70
 Yes   57 12.78
 Missing 203 45.52
Ethnicity
 Hispanic 129 28.92
 African American 103 23.09
 White   34   7.62
 Two or more races 175 39.24
 Others  3   0.67
 Missing  2   0.45
Free/reduced lunch
 No   22   5.25
 Free/reduced 271 64.68
 Missing 126 30.07

Note. Missing data refer to data that we were unable to access from participating school districts.

Measures

At each site, senior research staff provided extensive test administration training to test administrators hired by the research team, who were blind to student condition. After training occurred, senior assessment staff established reliability with test administrators by observing each administrator implement all measures. The administration of student assessments took place during fall (September–October) of the school year. Extensive information about the testing procedures and measures (including psychometric information) can be found at www.texasldcenter.org/projects/measures. All of the measures utilized in the present study are frequently used, standardized measures. We briefly describe them below.

Measures used to identify latent profiles of reading.

In line with previous research examining the SVR (e.g., Gough & Tunmer, 1986), we used measures of word recognition and listening comprehension to identify latent profiles of reading. Specifically, the Woodcock–Johnson III (WJ-III) Letter-Word Identification (LWID) and Oral Comprehension (OC) subtests were used as indicators of decoding and listening comprehension, respectively (Woodcock et al., 2001). The LWID subtest is an untimed measure of letter and real-word reading that assesses participants’ word decoding skill. The LWID subtest has a median reliability of .93 for ages 9 to 11 years. The OC subtest is an individually administered, standardized measure of a student’s ability to understand oral passages. Specifically, after a passage is read aloud, students are required to provide the missing word at the end of a sentence. The OC subtest has a median test–retest reliability of .78 for ages 9 to 11 years.

Reading comprehension measures.

Reading comprehension was assessed using three commonly used tests that vary in the way they assess reading comprehension: the WJ-III Passage Comprehension subtest (Woodcock et al., 2001), the GMRT–fourth edition (MacGinitie et al., 2000), and the Test of Silent Reading Efficiency and Comprehension (TOSREC; Wagner et al., 2010). The WJ-III Passage Comprehension is an untimed, individually administered test that requires students to read passages of varying length and respond by filling in the missing word (i.e., cloze task). The WJ-III Passage Comprehension subtest has a median test–retest reliability of .89 for ages 9 to 11 years. The GMRT is a timed, group-administered assessment consisting of expository and narrative passages ranging in length from 3 to 15 sentences. Students read each passage silently and answer multiple-choice questions. The GMRT has excellent stability and high internal consistency (the K-R 20 coefficient in Grade 4 = .93). Finally, the TOSREC is a brief (3 min) assessment that requires participants to read sentences silently and indicate the veracity of each statement by circling “yes” or “no.”

Cognitive measures.

Several cognitive attributes related to reading were assessed: phonological awareness (PA), rapid automatized naming (RAN), VK, WM, EF, and nonverbal reasoning. The Comprehensive Test of Phonological Processing (CTOPP; Wagner et al., 1999) Phoneme Elision subtest was used to measure students’ PA abilities. The Phoneme Elision subtest is a 20-item subtest that measures students’ capacity to form new words by removing sounds from spoken words; students hear a word and then say the word with one sound removed. Internal consistency coefficients range from .91 to .95 across subgroups, and test–retest reliability at this age range is .79. RAN was assessed using the CTOPP Rapid Letter Naming subtest, which measures the speed with which an examinee can name letters on two pages (Wagner et al., 1999). Internal consistency coefficients range from .84 to .97 across subgroups, and test–retest reliability at this age range is .72. The Kaufman Brief Intelligence Test–Second Edition (KBIT-2; Kaufman, 2004) VK subtest was used to assess students’ receptive vocabulary and general knowledge. Students are asked to match a stimulus picture that relates to word or world knowledge (e.g., nature, art, science) with a word or phrase spoken by the examiner. The adjusted test–retest reliability coefficient for the VK subtest is .88 for ages 4 to 12 years. The Listening Recall task from the Working Memory Test Battery for Children (WMTB-C) was used to assess WM. During the task, students are presented with a short sentence and asked to determine the veracity of the sentence by saying “true” or “false.” The sentences are presented in series ranging from one to six sentences at a time. After the series of sentences is presented, the student is asked to recall the last word from each presented sentence. The WJ-III Planning subtest was used as an indicator of EF. To assess executive control and forethought, the task calls for students to trace a complex path with overlapping segments without lifting the pencil, skipping part of the path, or retracing any part of the path. Test–retest reliabilities for ages 9 to 11 years range from .75 to .78. Finally, the KBIT-2 Matrices subtest (Kaufman, 2004) was used to assess nonverbal reasoning. The Matrices subtest requires students to select the picture among five or six choices that best fits with the stimulus diagrams or completes an analogy. The adjusted test–retest reliability coefficient for the Matrices subtest is above .76 for ages 4 to 12 years.

Data Analytic Strategy

Preliminary analysis.

Prior to conducting LPA, we screened data for missingness and outliers. We identified two severe univariate outliers based on Tukey’s (1977) method (i.e., values more than three interquartile ranges below the first quartile or above the third quartile), one having an extremely high score on WJ-III OC and another having an extremely low score on WJ-III LWID. These two cases were also identified as bivariate outliers based on visual inspection of the scatter plot. To reduce the influence of outliers on characteristics of the distribution and to promote the efficient estimation of latent classes, we removed the outliers from the data set. As a check, we conducted a follow-up sensitivity test to evaluate the effect, if any, of dropping the two cases. Including the outliers resulted in model non-identification due to a nonpositive definite first-order derivative product matrix. Thus, we excluded the outliers from the final analyses.
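For readers who wish to apply the same screening rule, the sketch below is one minimal reading of the Tukey (1977) criterion described above (values more than three interquartile ranges beyond the quartiles). The function name and example scores are hypothetical; the authors’ actual screening was not performed with this code.

```python
import numpy as np

def tukey_severe_outliers(x, k=3.0):
    """Flag values more than k interquartile ranges below Q1 or above Q3
    (Tukey, 1977); k = 3 targets severe ("far out") outliers."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

# Hypothetical standard scores: the extreme low value should be flagged.
oc_scores = np.array([92, 88, 85, 79, 95, 41, 90, 83], dtype=float)
print(tukey_severe_outliers(oc_scores))  # [False ... True ... False]
```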

We examined missing data; missingness was due largely to the planned missing-data design used to collect data on cognitive variables in the larger study from which data for the present study were drawn (Vaughn et al., 2016). Missing data represented 5.52% of the data. One case was removed because the child did not complete a majority of the assessment battery. The pattern of missingness of the cognitive variables was completely at random, Little’s Missing Completely At Random (MCAR) test, χ2(32) = 33.68, p = .39. We used multiple imputation to generate 10 sets of imputed data. We aggregated across imputed data sets to compute estimates for missing values, and we used a robust maximum likelihood estimator in Mplus v.7.4 for LPA (Muthén & Muthén, 1998–2015). Descriptive statistics and bivariate correlations derived from the imputed data sets are presented in Table 2.
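The study’s imputation and pooling were carried out within the Mplus workflow; as a rough illustration of the general procedure (generate m imputed data sets, then pool point estimates across them), a minimal Python sketch using scikit-learn’s chained-equations imputer might look like the following. Variable names and values are hypothetical placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def multiply_impute(df, m=10, seed=0):
    """Create m imputed copies of df via chained equations (MICE-style)."""
    imputed_sets = []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
        imputed_sets.append(pd.DataFrame(imputer.fit_transform(df), columns=df.columns))
    return imputed_sets

# Hypothetical cognitive scores with scattered missing values.
df = pd.DataFrame({"PA": [7, np.nan, 9, 5, 8],
                   "RAN": [8, 7, np.nan, 6, 9],
                   "VK": [82, 75, 90, np.nan, 88]})
imputations = multiply_impute(df, m=10)
# Pool point estimates (means) across the imputed data sets.
pooled_means = pd.concat([d.mean() for d in imputations], axis=1).mean(axis=1)
print(pooled_means)
```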

Table 2.

Correlations and Descriptive Statistics (N = 446).

Constructs WR LC PA RAN VK WM EF NR RF/C Cloze MCRC
Word Reading (WR)
Listening Comprehension (LC) .06
Phonological Awareness (PA) .44 .19
Rapid Automatized Naming (RAN) .28 .25 .26
Verbal Knowledge (VK) .13 .61 .21 .23
Working Memory (WM) .18 .45 .23 .18 .36
Executive Functioning (EF) .19 .26 .31 .28 .27 .21
Nonverbal Reasoning (NR) .17 .09 .32 .08 .22 .09 .31
Reading Fluency/Comprehension (RF/C) .40 .33 .23 .27 .28 .26 .11 .05
Cloze-Reading Comprehension (Cloze) .57 .46 .42 .38 .45 .32 .26 .22 .51
Multiple-Choice Reading Comprehension (MCRC) .25 .17 .14 .20 .18 .12 .04 −.01 .33 .24
M 89.89 83.56 7.39 7.85 80.63 87.82 100.15 95.95 69.10 81.93 77.15
SD 10.64 14.96 2.70 2.15 15.87 14.54 8.03 14.49 11.14 8.77 6.0

Note. Correlation coefficients greater than .1 are significant at alpha = .05.
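As a rough check of this note, the critical correlation for a two-tailed test at alpha = .05 with n = 446 follows from the t test for a correlation coefficient:

```latex
t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}
\quad\Longrightarrow\quad
r_{\mathrm{crit}} = \frac{t_{.975,\,444}}{\sqrt{t_{.975,\,444}^{2} + (n-2)}}
\approx \frac{1.97}{\sqrt{1.97^{2} + 444}} \approx .09
```

so coefficients of roughly .10 or larger clear the .05 threshold, consistent with the note.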

LPA.

LPA is an empirically driven approach for identifying the ideal number of subgroups with distinct profiles using continuous indicators. LPA identifies subgroups by maximizing homogeneity within each class and maximizing heterogeneity between subgroups. These subgroups are considered latent because the membership is not directly observed but determined by examining the patterns of means and interrelations among various indicators. We used WJ-III LWID (word reading) and WJ-III OC (listening comprehension) to identify, or “index,” the component skills underlying different profiles of reading comprehension among struggling readers.

We conducted LPA with standard scores, and variances were estimated with between-class equality constraints. To determine the number of profiles, we considered the Bayesian information criterion (BIC) as a key statistical indicator for the selection of the profiles (Nylund et al., 2007); smaller BIC values indicate better model fit. In addition, we evaluated the models based on the Bootstrap Likelihood Ratio Test (BLRT) and Lo–Mendell–Rubin Adjusted Likelihood Ratio Test (LMRT). The BLRT and LMRT compare the improvement in model fit between the k − 1 and k class models and provide a p value associated with the difference in model fit; a p value less than .05 indicates that the k class model fits significantly better than the k − 1 class model. We also considered theoretical interpretability of the profile solution as well as entropy values. Higher entropy values indicate more accurate classification into the profiles, but entropy was not used as a critical indicator for selecting the profile solution (Lubke & Muthén, 2007). Because BLRT and LMRT are not available with multiple imputation and the index variables did not have missing cases, LPA was conducted with the original sample. We performed one-way analysis of variance (ANOVA) to conduct pairwise comparisons of classification variables between pairs of groups.
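The LPA itself was estimated in Mplus. As a rough illustration of the model-comparison logic only, the sketch below fits one- to five-class Gaussian mixture models to two indicators and compares BIC values. It is an approximation under stated assumptions: scikit-learn provides no LMRT or BLRT, the shared ("tied") covariance matrix only loosely mirrors the between-class equality constraints described above, and the data are simulated placeholders rather than the study data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated placeholders standing in for WJ-III LWID and OC standard scores.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(90, 10, 446), rng.normal(84, 15, 446)])

fit_stats = {}
for k in range(1, 6):
    # covariance_type="tied" shares one covariance matrix across classes,
    # loosely mirroring between-class equality constraints on variances.
    gmm = GaussianMixture(n_components=k, covariance_type="tied",
                          n_init=50, random_state=0).fit(X)
    fit_stats[k] = {"BIC": round(gmm.bic(X), 1), "AIC": round(gmm.aic(X), 1)}

best_k = min(fit_stats, key=lambda k: fit_stats[k]["BIC"])  # smaller BIC = better fit
print(fit_stats)
print("Preferred solution by BIC:", best_k, "profiles")
```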

Reading comprehension performance and cognitive predictors of profile membership.

Once the optimal number of profiles was identified and cases were sorted by profile, we tested for differences in reading comprehension performance across the profiles. We used the Bolck–Croon–Hagenaars (BCH; Bakk & Vermunt, 2016) method with auxiliary variables in Mplus v.7.4 to account for measurement error in the latent profiles. BCH employs a weighted ANOVA, where the weights are the inverse of the classification error probabilities from the LPA model (Asparouhov & Muthén, 2014). Differences in profile-specific means on outcome variables were tested using Wald χ2 tests.
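To illustrate the general form of such a comparison, a pairwise Wald statistic for two profile-specific means can be written as below. This simplification ignores the BCH classification-error weights, so it only approximates the tests actually reported; the worked values use the TOSREC means and standard errors reported later in Table 4 for the Moderate RLD and Severe WR profiles.

```latex
W = \frac{(\bar{Y}_{1} - \bar{Y}_{2})^{2}}{SE_{1}^{2} + SE_{2}^{2}} \sim \chi^{2}(1),
\qquad
W = \frac{(70.48 - 54.80)^{2}}{0.57^{2} + 1.09^{2}} \approx \frac{245.9}{1.51} \approx 162
```

a value far beyond the χ²(1) critical value of 3.84, illustrating how sharply the profile means can differ even before classification-error weighting.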

We also examined the influence of student-level cognitive variables on profile membership. We conducted multinomial logistic regression analysis to test the relations between cognitive predictors (PA, rapid naming, vocabulary, WM, nonverbal reasoning, and EF) and the likelihood of dummy-coded profile membership (Vermunt, 2010). This was done using the R3STEP command in Mplus v.7.4, which prevents the covariates from altering the profile solution (Asparouhov & Muthén, 2014). Each cognitive predictor was entered separately in the model due to the small class counts in two of the profiles. Because R3STEP uses listwise deletion to handle missing data in the auxiliary variables, we used multiple imputation to handle missingness in the cognitive predictors.
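The R3STEP procedure is specific to Mplus and adjusts for classification error. The sketch below shows only the simpler "classify-then-analyze" analogue in Python, regressing most-likely profile membership on one cognitive predictor at a time with a multinomial logit; the data are simulated placeholders and the estimates are illustrative only, not a reproduction of the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated placeholder data: most-likely profile membership
# (0 = Moderate RLD, 1 = Severe WR, 2 = Severe LC) and one cognitive predictor.
rng = np.random.default_rng(1)
n = 446
profile = rng.choice([0, 1, 2], size=n, p=[0.91, 0.05, 0.04])
pa = rng.normal(7.4, 2.7, size=n) - 1.5 * (profile > 0)  # lower PA in severe profiles

X = sm.add_constant(pd.DataFrame({"PA": pa}))
fit = sm.MNLogit(profile, X).fit(disp=False)  # one predictor entered at a time
print(fit.summary())        # log-odds of each severe profile relative to Moderate RLD
print(np.exp(fit.params))   # odds ratios per one-point increase in the predictor
```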

Results

LPA

We selected the three-profile solution because it had the lowest BIC (see Table 3) and because the LMRT and BLRT comparing the two- and three-class models were statistically significant, whereas the values (LMRT and BLRT) for comparisons of the three- and four-class solutions were nonsignificant. The three-profile solution was associated with high entropy (.91), and the profile pattern was the most theoretically tenable. Estimated means of classification variables and reading comprehension measures for each profile are presented in Table 4.

Table 3.

Model Comparison.

Number of profiles BIC sBIC AIC LMRT (p) BLRT (p) Class count Entropy
2 profiles 7,053.68 7,031.47 7,024.98 41.21 (<.01) 43.46 (<.01) 18, 428 .93
3 profiles 7,043.14 7,011.40 7,002.14 27.35 (<.01) 28.85 (<.01) 17, 408, 21 .91
4 profiles 7,051.36 7,010.11 6,998.06 9.56 (.42) 10.08 (.07) 388, 22, 17, 19 .85
5 profiles 7,061.03 7,010.25 6,995.42 8.19 (.37) 8.64 (.13) 338, 11, 27, 25, 45 .74

Note. BIC = Bayesian information criterion; sBIC = sample-size-adjusted Bayesian information criterion; AIC = Akaike information criterion; LMRT = Lo–Mendell–Rubin Adjusted Likelihood Ratio Test; BLRT = Bootstrap Likelihood Ratio Test.

Table 4.

Estimated Means of Index Variables and Reading Comprehension Outcome Variables (N = 446).

Index variables and reading comprehension Moderate RLD: M (SE) Severe WR Difficulties: M (SE) Severe LC Difficulties: M (SE)
Index variables
 Word Reading (WJ-III LWID) 91.82 0.46 66.61 2.00 79.04 3.15
 Listening Comprehension (WJ-III OC) 85.04 0.73 88.45 2.74 46.84 3.89
Reading comprehension
 Reading Fluency/Comprehension (TOSREC) 70.48 0.57 54.80 1.09 57.48 1.60
 Cloze-Reading Comprehension (WJ-III PC) 83.91 0.38 63.69 2.00 64.14 2.54
 Multiple-Choice Reading Comprehension (GMRT-4 RC) 77.69 0.31 72.93 1.48 71.29 1.54

Note. RLD = Reading/Language Difficulties; WR = Word Reading; LC = Listening Comprehension; SE = standard error; WJ-III = Woodcock–Johnson III; LWID = Letter-Word Identification; OC = Oral Comprehension; TOSREC = Test of Silent Reading Efficiency and Comprehension; PC = Passage Comprehension; GMRT-4 RC = Gates-MacGinitie Reading Comprehension Subtest–Fourth Edition.

We compared mean scores on classification variables across the profiles, considered the relative strengths and weaknesses within each profile, and examined how severe the component skill deficits were compared with the national normative sample to create interpretative labels for each reading profile (see Figure 1). The first and largest profile (n = 408, 90.2%) included students with Moderate Reading/Language Difficulties (RLD). They scored higher than the other groups on word reading (M = 91.82) and showed low listening comprehension performance (M = 85.04). They were in the low average range (30th percentile and 16th percentile, respectively) on both outcomes. We labeled the second profile (n = 21, 5.4%) Severe Word Reading (WR) Difficulties because these students showed the lowest word reading (M = 66.61) but the highest level of listening comprehension (M = 88.45). Pairwise comparisons indicated this profile had significantly lower word reading, F(1, 423) = 221.07, p < .01, but a similar level of listening comprehension, F(1, 423) = 1.81, p = .18, compared with the Moderate RLD profile. Whereas this profile had low average listening comprehension scores (21st percentile), they had severe word reading difficulties (1st percentile) in comparison with the national norm. We labeled the third profile (n = 17, 4.4%) Severe Listening Comprehension (LC) Difficulties because its members demonstrated low word reading performance (M = 79.04), but their listening comprehension performance was particularly low (M = 46.84). Students with Severe LC Difficulties had significantly lower word reading, F(1, 423) = 39.29, p < .01, and listening comprehension, F(1, 423) = 180.49, p < .01, compared with students in the Moderate RLD profile. Compared with the Severe WR profile, they had relatively higher word reading, F(1, 423) = 39.61, p < .01, but profoundly lower listening comprehension, F(1, 423) = 181.51, p < .01. The Severe LC group had low average word reading (10th percentile) but severely low listening comprehension (<.01 percentile) compared with the national norm. The standard scores and percentiles should be interpreted with caution due to potential unreliability at the tail end of the distribution.
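The percentile ranks cited here follow directly from the WJ-III standard-score metric (M = 100, SD = 15). For example, for the Moderate RLD profile:

```latex
z_{\mathrm{WR}} = \frac{91.82 - 100}{15} \approx -0.55
\;\Rightarrow\; \Phi(-0.55) \approx .29 \;(\approx 30\text{th percentile}),
\qquad
z_{\mathrm{LC}} = \frac{85.04 - 100}{15} \approx -1.00
\;\Rightarrow\; \Phi(-1.00) \approx .16 \;(16\text{th percentile})
```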

Figure 1.

Group sizes and component reading skill standard score means for the latent profiles.

Note. This figure shows the average word reading and listening comprehension standard scores for each profile.

Given the large proportion of English learners (ELs) in the sample (51%), we examined whether EL status was associated with group membership. Results indicated that ELs were more likely to be in the Severe LC group than the Moderate RLD (b = 1.769, standard error [SE] = .885, p = .046) or Severe WR group (b = 2.877, SE = 1.08, p = .01). Although it is noteworthy that ELs were overrepresented in the Severe LC profile, 94% of ELs in the sample were classified in another latent profile (i.e., Severe WR or Moderate RLD) and the largest profile—Moderate RLD (91% of total sample)—was composed of nearly equal numbers of ELs and non-ELs (49% ELs and 51% non-ELs). This suggests that the latent profiles primarily reflect heterogeneity in the component skills of reading rather than EL status.
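For interpretability, the reported logit coefficients can be converted to odds ratios:

```latex
OR = e^{b}: \qquad e^{1.769} \approx 5.9, \qquad e^{2.877} \approx 17.8
```

that is, the odds of Severe LC membership were roughly 6 times higher for ELs relative to the Moderate RLD profile and roughly 18 times higher relative to the Severe WR profile, although the wide standard errors imply considerable uncertainty around these ratios.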

Reading Comprehension Performance and Cognitive Predictors of Profile Membership

Students with Moderate RLD outperformed members of the two other groups on all tests of reading comprehension, χ2s(2) > 16, ps < .01. The Severe LC Difficulties profile and the Severe WR Difficulties profile did not differ from each other on reading comprehension, .02 < χ2s(2) < 1.90, .16 < ps < .89.

Finally, logistic regressions (Table 5) indicated that students with lower PA, RAN, VK, WM, and EF were more likely to be in the Severe LC Difficulties profile than the Moderate RLD profile. Students with lower PA, RAN, VK, and NR were more likely to be in the Severe WR Difficulties profile than the Moderate RLD profile. Compared with the Severe WR Difficulties profile, students were more likely to be in the Severe LC Difficulties profile when they had lower VK, WM, and EF.

Table 5.

Association Between Cognitive Predictors and Profile Membership.

Cognitive predictors Moderate RLD vs. Severe LC Difficulties (Coefficient, SE, p) Moderate RLD vs. Severe WR Difficulties (Coefficient, SE, p) Severe LC Difficulties vs. Severe WR Difficulties (Coefficient, SE, p)
Phonological Awareness (PA) −0.79 0.22 <.01 −0.59 0.23   .01   0.21 0.23   .36
Rapid Automatized Naming (RAN) −0.73 0.19 <.01 −0.38 0.15   .01   0.35 0.20   .08
Verbal Knowledge (VK) −0.15 0.02 <.01 −0.01 0.02 <.01   0.15 0.03 <.01
Working Memory (WM) −0.11 0.03 <.01   0.01 0.03   .81   0.12 0.04 <.01
Executive Functioning (EF) −0.19 0.06 <.01 −0.05 0.03   .12   0.14 0.06   .02
Nonverbal Reasoning (NR) −0.04 0.02   .10 −0.04 0.02   .01 −0.01 0.03   .80

Note. RLD = Reading/Language Difficulties; WR = Word Reading; LC = Listening Comprehension; SE = standard error.

Discussion

It is well recognized that a significant number of students in Grades 4 and beyond are unable to read and understand text at a basic level (National Assessment of Educational Progress, 2019). Relative to studies conducted with students in the primary grades (e.g., Wanzek et al., 2016, 2018), few reading interventions have demonstrated robust effects on standardized measures of reading comprehension for students in the upper elementary and secondary grades (Scammacca et al., 2015; Wanzek et al., 2010). Previous research has aimed to inform the development of effective interventions for struggling readers by investigating individual differences in the component skills of reading comprehension. This study sought to address gaps in the extant literature by investigating the word reading and listening comprehension difficulties among fourth-grade students with significant reading comprehension deficits and the cognitive difficulties that underlie these weaknesses. In doing so, we addressed three related questions: (a) Do reading profiles emerge based on the severity and specificity of the component skills of reading?, (b) To what extent do these reading profiles vary in their reading comprehension performance?, and (c) What related cognitive processes predict group membership?

Profiles of Struggling Readers

In addressing RQ1, three distinct reading profiles emerged: Moderate RLD, Severe WR difficulties, and Severe LC difficulties. As hypothesized, latent reading profiles reflected both the severity and specificity of reading skill deficits; however, less than 10% of students were classified in subgroups characterized based on the specificity of component reading skill deficits (e.g., Severe WR or LC difficulties). More than 90% of students demonstrated commensurate difficulties in both word reading and listening comprehension. The profiles identified in this study (Moderate RLD, Severe WR, and Severe LC) relate to the classification system based on the SVR put forth by Catts and colleagues (2006), which identified three subgroups of struggling readers: specific word reading deficits, specific comprehension deficits, and mixed deficits. However, our results advance our understanding of upper elementary students with significant reading difficulties in at least two ways. First, the results of the LPA provide estimates of the prevalence of each subgroup. The vast majority of students in this study were placed in a reader profile characterized by multiple deficits in both word reading and linguistic comprehension. Second, whereas Catts and colleagues identified subgroups of eighth-grade struggling readers with below-average performance in one component skill and normal or near-normal performance in the other (e.g., students with specific word reading deficits showed poor performance in word reading but normal or near-normal performance in listening comprehension), our analyses did not identify latent profiles with normal performance in word reading or listening comprehension. Of note, whereas our study included students who scored at least 1 standard deviation (SD) below the mean in reading comprehension, Catts and colleagues identified subgroups from a sample of eighth-grade students who all showed language impairments in kindergarten but many of whom had developed average or above-average word recognition and reading comprehension skills.

These findings generally align with previous latent class analyses conducted with struggling readers in Grades 6 to 9, which found a majority of students struggled in both decoding and linguistic comprehension (Brasseur-Hock et al., 2011; Clemens et al., 2017; Lesaux & Kieffer, 2010). What may distinguish these findings from those of previous studies is the high prevalence of students (91%) who demonstrated relatively similar weakness on both SVR components. For reference, in studying middle school students, Clemens et al. (2017) and Brasseur-Hock et al. (2011) found a majority of students demonstrated underachievement on both component skills; however, the prevalence rates of students with multiple deficits were lower (57% and 51%, respectively). The finding that no latent profiles emerged with average word reading or listening comprehension performance also distinguishes these study results from the previous middle school studies. In the previous middle school studies, subgroups of students emerged with average or above-average performance in one of the component skills. Clemens et al. (2017) found, for example, that 43% of students demonstrated average performance in either reading fluency or vocabulary. Correspondingly, Brasseur-Hock et al. (2011) and Lesaux and Kieffer (2010) found 49% and 82%, respectively, showed average or above-average levels of performance on at least one component skill of reading comprehension.

Understanding the characteristics of the present study sample and how it varies from the previous studies may help explain why a very high proportion of students demonstrated similar underperformance in the component skills of reading comprehension and no profiles emerged with average or above-average performance in the component skills of decoding and listening comprehension. First, the sample for the present study demonstrated significant deficits in reading comprehension. Across comprehension measures, mean standard scores for this sample range from 69.10 to 81.93, indicating that performance, on average, was more than 1.5 SD below normative expectations. Students in previous samples scored closer to 0.5 (Lesaux & Kieffer, 2010) or 1 SD (Brasseur-Hock et al., 2011; Clemens et al., 2017) below the mean in reading comprehension. The sample included in this study could be characterized as demonstrating significant risk for reading disabilities or dyslexia.

A second difference may be related to differences in grade level between this study and previous studies conducted with students in Grades 6 to 9. Fourth-grade students may be more likely to have difficulties in the foundational reading skills than middle school students, as secondary students have experienced more opportunities to solidify reading skills due to their additional years of schooling and potentially extra time within school-based reading interventions. In addition, research indicates that word reading is more strongly associated with reading comprehension in younger grades than later grades (e.g., Francis et al., 2005), as texts in the elementary grades are less complex and rely less heavily on linguistic processes related to reading for understanding (e.g., inference-making and syntactic processing). Finally, given previous research indicating that English learners’ reading difficulties are often primarily due to underdeveloped language skills (e.g., Aarts & Verhoeven, 1999; Lesaux et al., 2006), these findings should be interpreted in light of the high proportion (51%) of ELs in the current sample. The proportion of ELs in this study was most similar to that of the Lesaux and Kieffer (2010) study, which included a very high proportion of ELs (77%) and also found that all reader profiles were marked by underdeveloped linguistic comprehension (as measured by vocabulary). However, as mentioned above, the Lesaux and Kieffer sample consisted of older students who were about 1 SD higher in reading comprehension than the students in this study, which may help explain why the reader profiles that emerged in this study showed consistently greater deficits in word reading. The heterogeneity in the study samples and findings underscores the need for further investigation of the reading profiles of students with significant reading difficulties, including ELs and monolingual students.

Reading Comprehension Performance and Cognitive Predictors of Profile Membership

In addressing RQ2, we evaluated mean differences in reading comprehension among the latent profiles. Results indicated that students with Moderate RLD outperformed students with more significant difficulties in word reading or listening comprehension. No differences were present between the Severe WR and LC profiles. These findings corresponded with our hypothesis that reading comprehension performance would be associated with the severity of component skill difficulties.

To address our final research question, we conducted regression analyses examining the associations between cognitive predictors and profile membership. Based on previous research (e.g., Cho et al., 2015; Miciak et al., 2014), we hypothesized that profiles with specific component deficits would exhibit corresponding deficits in theoretically and empirically implicated cognitive processes. The results supported this hypothesis with a few exceptions. For instance, relative to students in the Moderate RLD profile, students in the Severe WR profile performed lower on cognitive predictors commonly associated with word reading deficits (PA, RAN); however, these students also performed lower on predictors not frequently associated with a deficit in this area (VK and nonverbal reasoning). Given that students in the Moderate RLD and Severe WR profiles scored similarly on listening comprehension, it is surprising that students in the Severe WR profile scored lower on VK and nonverbal reasoning, which are typically associated with listening comprehension (Ouellette, 2006; Tannenbaum et al., 2006). Of note, although the differences met the critical alpha level for significance, the coefficients for vocabulary knowledge (−0.01) and nonverbal reasoning (−0.04) were very small in relation to the coefficients for PA (−0.59) and RAN (−0.38).

Contrasts between the Severe LC and WR profiles revealed that students in the Severe LC profile performed significantly lower on VK, WM, and EF, as hypothesized. These results are consistent with previous research that suggests VK (i.e., vocabulary; Lesaux et al., 2006) and the cognitive processes of WM (e.g., Lesaux & Kieffer, 2010) and EF (e.g., Cutting et al., 2009) are more strongly associated with more complex processes such as linguistic comprehension and less related to less complex processes like decoding. The differences in underlying cognitive performance between students in different reading profiles provide further empirical support for the role of these cognitive processes in reading development and may help identify potential targets for intervention and/or dimensions for improving the prediction of reading risk.

Implications for Research and Practice

These results suggest that the most prevalent profile among students with substantial reading comprehension deficits in Grade 4 is one of difficulties in both word reading and linguistic comprehension. Students in this profile scored, on average, below the 30th percentile in both domains, with slightly lower performance in listening comprehension than word reading. The finding that a high number of students demonstrate both word reading and linguistic comprehension difficulties corresponds with other recent research that underscores the importance of addressing word reading difficulties for students in Grades 4 to 12 (e.g., Cirino et al., 2013; Hock et al., 2009). These findings contrast with prominent reports on adolescent readers and instructional approaches designed to address the reading difficulties of adolescents. For instance, the influential Reading Next report states, “Very few of these older struggling readers need help to read the words on a page; their most common problem is that they are not able to comprehend what they read” (Biancarosa & Snow, 2006, p. 11). The findings from this study and the aforementioned research conducted with struggling readers beyond Grade 3—particularly students with significant reading difficulties such as dyslexia—call into question the conclusion that students’ reading comprehension problems are seldom related to word reading.

This notion that few readers after the primary grades have difficulty in word reading may be contributing to the development of reading interventions that inadequately address word reading. In a meta-analysis of the effects of reading interventions for struggling readers in Grades 4 to 12, Scammacca et al. (2015) found that 44% of interventions targeted only reading comprehension or vocabulary and omitted instruction addressing deficits in word reading or reading fluency. Further underscoring the importance of enhancing word reading and linguistic comprehension, Scammacca and colleagues found that multicomponent reading interventions, nearly all of which included instruction addressing reading comprehension and word reading or fluency, were the most effective type of intervention approach for Grades 4 to 12. Scammacca et al. (2015) interpreted these findings as providing support for interventions that include support at both the word and text level.

Future Research

One avenue for future research related to upper elementary students with dyslexia may be to examine the effects of multicomponent interventions that include varying levels of word reading and fluency instruction. The percentage of time allocated to each component of instruction in a multicomponent intervention was addressed in a study conducted by Wanzek and colleagues (2017). The authors reported that the standard implementation of the Voyager Passport program allocated 18% of instructional time to text reading and fluency, 12% to decoding, and 5% to spelling. Instructional records showed that nearly two thirds of instructional time was dedicated to vocabulary and oral language (21%) and reading comprehension (41%). It may be that these proportions represent the optimal amount of time spent on the component skills of reading because developing vocabulary, oral language, and comprehension proficiency are more time-intensive tasks. However, this is an empirical question that may be worth assessing in future studies.

Limitations

There are a few study limitations worthy of consideration. One consideration relates to the size of the sample and generalizability of the study findings. Although this study’s sample size exceeds those of other similar studies (e.g., Brasseur-Hock et al., 2011; Clemens et al., 2017; Lesaux & Kieffer, 2010) and the findings are generally consistent with those studies, the present findings warrant confidence only to the extent that these findings replicate with other samples of students with significant reading difficulties. Given the high proportion of English learners in the sample, these findings may generalize best to linguistically diverse populations of students with significant reading difficulties. We were also limited in the measures we could collect. We were unable to create latent constructs for word reading and listening comprehension due to constraints on testing time. We recognize that additional measures of the component skills, as well as of contextual factors (e.g., exposure to evidence-based instruction), may help us better understand the profiles of students with significant reading difficulties.

Conclusion

Our findings indicate that upper elementary and secondary students with significant reading difficulties such as dyslexia frequently demonstrate significant deficits in both decoding and linguistic comprehension (Catts et al., 2006; Clemens et al., 2017; Hock et al., 2009). Although average and above-average readers may shift from learning to read to reading to learn by the upper elementary grades (Chall, 1983), results indicate that fourth graders with well below-average reading comprehension skills present deficits in word reading that will likely require remediation. This is not to discount the importance of developing vocabulary, general knowledge, inference-making, and other linguistic processes that facilitate reading for understanding. We interpret our results as highlighting the need for multicomponent intervention approaches that target linguistic comprehension as well as word reading.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by grant P50 HD052117-07 from the Eunice Kennedy Shriver National Institute of Child Health and Human Development at the National Institutes of Health and by grant R324B170012 from the National Center for Special Education Research at the Institute of Education Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institutes of Health, or the Institute of Education Sciences.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. Aaron PG, Joshi RM, Gooden R, & Bentum KE (2008). Diagnosis and treatment of reading disabilities based on the component model of reading: An alternative to the discrepancy model of LD. Journal of Learning Disabilities, 41(1), 67–84. 10.1177/0022219407310838
2. Aarts R, & Verhoeven L (1999). Literacy attainment in a second language submersion context. Applied Psycholinguistics, 20, 377–393. 10.1017/S0142716499003033
3. Asparouhov T, & Muthén B (2014). Auxiliary variables in mixture modeling: Using the BCH method in Mplus to estimate a distal outcome model and an arbitrary secondary model. Mplus Web Notes, 21(2), 1–22. 10.1080/10705511.2014.915181
4. Bakk Z, & Vermunt JK (2016). Robustness of stepwise latent class modeling with continuous distal outcomes. Structural Equation Modeling: A Multidisciplinary Journal, 23(1), 20–31. 10.1080/10705511.2014.955104
5. Biancarosa G, & Snow CE (2006). Reading next—A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York (2nd ed.). Alliance for Excellent Education.
6. Brasseur-Hock IF, Hock MF, Kieffer MJ, Biancarosa G, & Deshler DD (2011). Adolescent struggling readers in urban schools: Results of a latent class analysis. Learning and Individual Differences, 21(4), 438–452. 10.1016/j.lindif.2011.01.008
7. Burns MK, Davidson K, Zaslofsky AF, Parker DC, & Maki KE (2018). The relationship between acquisition rate for words and working memory, short-term memory, and reading skills: Aptitude-by-treatment or skill-by-treatment interaction? Assessment for Effective Intervention, 43(3), 182–192. 10.1177/1534508417730822
8. Burns MK, Petersen-Brown S, Haegele K, Rodriguez M, Schmitt B, Cooper M, … VanDerHeyden AM (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31(1), 28. 10.1037/spq0000117
9. Catts HW, Adlof SM, & Weismer SE (2006). Language deficits in poor comprehenders: A case for the simple view of reading. Journal of Speech, Language, and Hearing Research, 49(2), 278–293. 10.1044/1092-4388(2006/023)
10. Catts HW, Herrera S, Nielsen DC, & Bridges MS (2015). Early prediction of reading comprehension within the simple view framework. Reading and Writing, 28(9), 1407–1425. 10.1007/s11145-015-9576-x
11. Catts HW, Hogan TP, & Adlof SM (2005). Developmental changes in reading and reading disabilities. In The connections between language and reading disabilities (pp. 38–51). Psychology Press. 10.4324/9781410612052
12. Chall JS (1983). Stages of reading development. McGraw-Hill.
13. Cho E, Capin P, Roberts G, Roberts GJ, & Vaughn S (2019). Examining sources and mechanisms of reading comprehension difficulties: Comparing English learners and non-English learners within the simple view of reading. Journal of Educational Psychology, 111(6), 982.
14. Cho E, Roberts GJ, Capin P, Roberts G, Miciak J, & Vaughn S (2015). Cognitive attributes, attention, and self-efficacy of adequate and inadequate responders in a fourth grade reading intervention. Learning Disabilities Research & Practice, 30(4), 159–170. 10.1111/ldrp.12088
15. Cirino PT, Romain MA, Barth AE, Tolar TD, Fletcher JM, & Vaughn S (2013). Reading skill components and impairments in middle school struggling readers. Reading and Writing, 26(7), 1059–1086. 10.1007/s11145-012-9406-3
16. Clemens NH, Simmons D, Simmons LE, Wang H, & Kwok OM (2017). The prevalence of reading fluency and vocabulary difficulties among adolescents struggling with reading comprehension. Journal of Psychoeducational Assessment, 35(8), 785–798. 10.1177/0734282916662120
17. Connor CM, Morrison FJ, & Petrella JN (2004). Effective reading comprehension instruction: Examining child by instruction interactions. Journal of Educational Psychology, 96, 682–698. 10.1037/0022-0663.96.4.682
18. Cromley JG, & Azevedo R (2007). Testing and refining the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 99(2), 311. 10.1037/0022-0663.99.2.311
19. Cronbach LJ, & Snow RE (1977). Aptitudes and instructional methods: A handbook for research on interactions. Irvington.
20. Cutting LE, Materek A, Cole CA, Levine TM, & Mahone EM (2009). Effects of fluency, oral language, and executive function on reading comprehension performance. Annals of Dyslexia, 59(1), 34–54. 10.1007/s11881-009-0022-0
21. Fletcher JM, Lyon GR, Fuchs LS, & Barnes MA (2018). Learning disabilities: From identification to intervention. Guilford Publications.
22. Fletcher JM, Stuebing KK, Barth AE, Denton CA, Cirino PT, Francis DJ, & Vaughn S (2011). Cognitive correlates of inadequate response to reading intervention. School Psychology Review, 40(1), 3–22.
23. Florit E, & Cain K (2011). The simple view of reading: Is it valid for different types of alphabetic orthographies? Educational Psychology Review, 23, 553–576. 10.1007/s10648-011-9175-6
24. Francis DJ, Fletcher JM, Catts HW, & Tomblin JB (2005). Dimensions affecting the assessment of reading comprehension. In Paris SG & Stahl SA (Eds.), Children’s reading comprehension and assessment (pp. 369–394). Lawrence Erlbaum Associates.
25. Fuchs D, Hendricks E, Walsh ME, Fuchs LS, Gilbert JK, Zhang Tracy W, … Peng P (2018). Evaluating a multidimensional reading comprehension program and reconsidering the lowly reputation of tests of near-transfer. Learning Disabilities Research & Practice, 33(1), 11–23. 10.1111/ldrp.12162
26. Gough PB, & Tunmer WE (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. 10.1177/074193258600700104
27. Hock MF, Brasseur IF, Deshler DD, Catts HW, Marquis JG, Mark CA, & Stribling JW (2009). What is the reading component skill profile of adolescent struggling readers in urban schools? Learning Disability Quarterly, 32(1), 21–38. 10.2307/25474660
28. Hoover WA, & Gough PB (1990). The simple view of reading. Reading and Writing, 2(2), 127–160. 10.1007/BF00401799
29. Johnson ES, Humphrey M, Mellard DF, Woods K, & Swanson HL (2010). Cognitive processing deficits and students with specific learning disabilities: A selective meta-analysis of the literature. Learning Disability Quarterly, 33(1), 3–18. 10.1177/073194871003300101
30. Kaufman AS (2004). Kaufman brief intelligence test (2nd ed.). AGS Publishing.
31. Keenan JM, & Meenan CE (2014). Test differences in diagnosing reading comprehension deficits. Journal of Learning Disabilities, 47(2), 125–135. 10.1177/0022219412439326
32. Kendeou P, Van den Broek P, White MJ, & Lynch JS (2009). Predicting reading comprehension in early elementary school: The independent contributions of oral language and decoding skills. Journal of Educational Psychology, 101(4), 765.
33. Kim YS (2011). Proximal and distal predictors of reading comprehension: Evidence from young Korean readers. Scientific Studies of Reading, 15(2), 167–190. 10.1080/10888431003653089
34. Language and Reading Research Consortium, & Chiu YD (2018). The simple view of reading across development: Prediction of grade 3 reading comprehension from prekindergarten skills. Remedial and Special Education, 39(5), 289–303. 10.1177/0741932518762055
35. Leach JM, Scarborough HS, & Rescorla L (2003). Late-emerging reading disabilities. Journal of Educational Psychology, 95(2), 211–224. 10.1037/0022-0663.95.2.211
36. Lesaux NK, & Kieffer MJ (2010). Exploring sources of reading comprehension difficulties among language minority learners and their classmates in early adolescence. American Educational Research Journal, 47(3), 596–632. 10.3102/0002831209355469
37. Lesaux NK, Koda K, Siegel LS, & Shanahan T (2006). Development of literacy of language minority learners. In August DL & Shanahan T (Eds.), Developing literacy in a second language: Report of the national literacy panel (pp. 75–122). Lawrence Erlbaum.
38. Lubke G, & Muthén BO (2007). Performance of factor mixture models as a function of model size, covariate effects, and class-specific parameters. Structural Equation Modeling, 14(1), 26–47. 10.1080/10705510709336735
39. MacGinitie WH, MacGinitie RK, Maria K, Dreyer LG, & Hughes KE (2000). Gates-MacGinitie reading tests (4th ed., GMRT-4). Riverside Publishing.
40. McMaster KL, Van den Broek P, Espin CA, White MJ, Rapp DN, Kendeou P, … Carlson S (2012). Making the right connections: Differential effects of reading intervention for subgroups of comprehenders. Learning and Individual Differences, 22, 100–111. 10.1016/j.lindif.2011.11.017
41. Melby-Lervåg M, Redick TS, & Hulme C (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science, 11(4), 512–534. 10.1177/1745691616635612
42. Miciak J, Fletcher JM, & Stuebing KK (2016). Accuracy and validity of methods for identifying learning disabilities in a response-to-intervention service delivery framework. In Handbook of response to intervention (pp. 421–440). Springer. 10.1007/978-1-4899-7568-3_25
43. Miciak J, Fletcher JM, Stuebing KK, Vaughn S, & Tolar TD (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29(1), 21–37. 10.1037/spq0000037
44. Morris RD, & Fletcher JM (1988). Classification in neuropsychology: A theoretical framework and research paradigm. Journal of Clinical and Experimental Neuropsychology, 10(5), 640–658. 10.1080/01688638808402801
45. Muthén LK, & Muthén BO (1998–2015). Mplus user’s guide. Muthén & Muthén.
46. Nation K, & Snowling M (1997). Assessing reading difficulties: The validity and utility of current measures of reading skill. British Journal of Educational Psychology, 67(3), 359–370. 10.1111/j.2044-8279.1997.tb01250.x
47. National Assessment of Educational Progress. (2019). NAEP mathematics and reading assessments: Highlighted results at grades 4 and 8 for the nation, states, and districts. U.S. Department of Education.
48. Nylund KL, Asparouhov T, & Muthén BO (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling: A Multidisciplinary Journal, 14(4), 535–569. 10.1080/10705510701575396
49. Ouellette GP (2006). What’s meaning got to do with it: The role of vocabulary in word reading and reading comprehension. Journal of Educational Psychology, 98, 554–566. 10.1037/0022-0663.98.3.554
50. Perfetti CA (1999). Comprehending written language: A blueprint of the reader. In The neurocognition of language (pp. 167–208). Oxford University Press. 10.1093/acprof:oso/9780198507932.003.0006
51. Roch M, & Levorato MC (2009). Simple view of reading in Down’s syndrome: The role of listening comprehension and reading skills. International Journal of Language & Communication Disorders, 44(2), 206–223. 10.1080/13682820802012061
52. Scammacca NK, Roberts G, Vaughn S, & Stuebing KK (2015). A meta-analysis of interventions for struggling readers in grades 4–12: 1980–2011. Journal of Learning Disabilities, 48(4), 369–390. 10.1177/0022219413504995
53. Stothard SE, & Hulme C (1995). A comparison of phonological skills in children with reading comprehension difficulties and children with decoding difficulties. Journal of Child Psychology and Psychiatry, 36(3), 399–408. 10.1111/j.1469-7610.1995.tb01298.x
54. Swanson HL (1999). Reading research for students with LD: A meta-analysis of intervention outcomes. Journal of Learning Disabilities, 32(6), 504–532. 10.1177/002221949903200605
55. Szadokierski I, Burns MK, & McComas JJ (2017). Predicting intervention effectiveness from reading accuracy and rate measures through the instructional hierarchy: Evidence for a skill-by-treatment interaction. School Psychology Review, 46(2), 190–200. 10.17105/SPR-2017-0013.V46-2
56. Tannenbaum KR, Torgesen JK, & Wagner RK (2006). Relationships between word knowledge and reading comprehension in third-grade children. Scientific Studies of Reading, 10(4), 381–398. 10.1207/s1532799xssr1004_3
57. Tilstra J, McMaster K, Van den Broek P, Kendeou P, & Rapp D (2009). Simple but complex: Components of the simple view of reading across grade levels. Journal of Research in Reading, 32(4), 383–401. 10.1111/j.1467-9817.2009.01401.x
58. Torppa M, Tolvanen A, Poikkeus AM, Eklund K, Lerkkanen MK, Leskinen E, & Lyytinen H (2007). Reading development subtypes and their early characteristics. Annals of Dyslexia, 57(1), 3–32. 10.1007/s11881-007-0003-0
59. Tukey JW (1977). Exploratory data analysis. Addison-Wesley.
60. Vaughn S, Roberts G, Capin P, Miciak J, Cho E, & Fletcher JM (2019). How initial word reading and language skills affect reading comprehension outcomes for students with reading difficulties. Exceptional Children, 85(2), 180–196. 10.1177/0014402918782618
61. Vaughn S, Solís M, Miciak J, Taylor WP, & Fletcher JM (2016). Effects from a randomized control trial comparing researcher and school-implemented treatments with fourth graders with significant reading difficulties. Journal of Research on Educational Effectiveness, 9(Suppl. 1), 23–44. 10.1080/19345747.2015.1126386
62. Vellutino FR, Fletcher JM, Snowling MJ, & Scanlon DM (2004). Specific reading disability (dyslexia): What have we learned in the past four decades? Journal of Child Psychology and Psychiatry, 45(1), 2–40. 10.1046/j.0021-9630.2003.00305.x
63. Vellutino FR, Scanlon DM, Small SG, Fanuele DP, & Sweeney J (2007). Preventing early reading difficulties through kindergarten and first grade intervention: A variant of the three-tier model. In Haager D, Klingner JK, & Vaughn S (Eds.), Evidence-based reading practices for response to intervention (pp. 185–219). Brookes.
64. Vermunt JK (2010). Latent class modeling with covariates: Two improved three-step approaches. Political Analysis, 18(4), 450–469. 10.1093/pan/mpq025
65. Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (1999). Comprehensive test of phonological processing: CTOPP. Pro-Ed.
66. Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (2010). Test of silent reading efficiency and comprehension. Pro-Ed.
67. Wanzek J, Petscher Y, Otaiba SA, Rivas BK, Jones FG, Kent SC, … Mehta P (2017). Effects of a year-long supplemental reading intervention for students with reading difficulties in fourth grade. Journal of Educational Psychology, 109(8), 1103. 10.1037/edu0000184
68. Wanzek J, Stevens EA, Williams KJ, Scammacca N, Vaughn S, & Sargent K (2018). Current evidence on the effects of intensive early reading interventions. Journal of Learning Disabilities, 51(6), 612–624. 10.1177/0022219418775110
69. Wanzek J, Vaughn S, Scammacca N, Gatlin B, Walker MA, & Capin P (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K–3. Educational Psychology Review, 28(3), 551–576. 10.1007/s10648-015-9321-7
70. Wanzek J, Wexler J, Vaughn S, & Ciullo S (2010). Reading interventions for struggling readers in the upper elementary grades: A synthesis of 20 years of research. Reading and Writing, 23(8), 889–912. 10.1007/s11145-009-9179-5
71. Woodcock RW, Mather N, McGrew KS, & Wendling BJ (2001). Woodcock-Johnson III tests of cognitive abilities. Riverside Publishing Company.
72. Yuill N, & Oakhill J (1991). Children’s problems in text comprehension: An experimental investigation. Cambridge University Press.