Abstract
The simple view of reading (SVR) predicts that reading difficulties can result from decoding difficulties, language comprehension difficulties, or a combination of these difficulties. However, classification studies have identified a fourth group of children whose reading difficulties are unexplained by the model. This may be due to the type of classification model used. The current research included 209 children in Grades 3–5 (8–10 years of age) from New Zealand. Children were classified using the traditional approach and a cluster analysis. In contrast to the traditional classification model, the cluster analysis approach eliminated the unexplained reading difficulties group, suggesting that poor readers can be accurately assigned to one of three groups, which are consistent with those predicted by the SVR. The second set of analyses compared the three poor reader groups across 14 measures of reading comprehension, decoding, language comprehension, phonological awareness, and rapid naming. All three groups demonstrated reading comprehension difficulties, but the dyslexia group showed particular weaknesses in word processing and phonological areas, the specific comprehension difficulty (SCD) group showed problems deriving meaning from oral language, and the mixed group showed general deficits in most measures. The findings suggest that the SVR has the potential to inform the identification of reading profiles and the selection of differentiated intervention methods.
Keywords: dyslexia, assessment, specific comprehension difficulty, simple view of reading
1. INTRODUCTION
Children who have reading difficulties are not a homogeneous group (Aaron et al., 1999; Catts et al., 2003; Ebert & Scott, 2016; Morris et al., 2017). Researchers have attempted to capture variation in reading difficulties through the use of classification approaches. One classification approach that has helped to explain the heterogeneity of difficulties exhibited by poor readers, and the variance shown by a typical population of readers, is based on the simple view of reading (SVR).
The SVR is one of the most well‐researched cognitive models of children's reading comprehension (Savage et al., 2015). The model states that reading comprehension ability is the product of decoding and language comprehension ability (Gough & Tunmer, 1986). Numerous studies have established the validity of this model through multiple regression analyses (Chen & Vellutino, 1997; Georgiou et al., 2009; Hoover & Gough, 1990; Savage, 2001, Savage, 2006) and structural equation modelling (Adlof et al., 2006; Chiu & Consortium, 2018; Foorman et al., 2015; Language and Reading Research Consortium, 2015; Lonigan et al., 2018; Vellutino et al., 2007). According to the SVR (Gough & Tunmer, 1986) reading comprehension difficulties could be due to an inability to decode, an inability to comprehend language or difficulties with both of these skills. Within the model, children who exhibit decoding difficulties in the absence of language comprehension difficulties are classified as having dyslexia. Children who exhibit difficulties in language comprehension in the absence of decoding difficulties are classified as having a specific comprehension difficulty (SCD). The mixed difficulty group demonstrates both decoding and language comprehension difficulties.
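Formally, the model is commonly written as a product, with each component notionally ranging from 0 (no skill) to 1 (perfect skill), so that reading comprehension collapses whenever either component approaches zero:

```latex
% Simple view of reading (Gough & Tunmer, 1986; Hoover & Gough, 1990):
% reading comprehension (R) is the product of decoding (D) and
% language comprehension (C), each ranging from 0 to 1.
R = D \times C, \qquad D, C \in [0, 1]
```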
Gough and Tunmer (1986) argued that the identification of a group of readers who exhibit reading comprehension difficulties in the absence of decoding and language comprehension difficulties would falsify their hypothesis. Nevertheless, previous SVR classification studies have identified a group of children who exhibit reading comprehension difficulties that were not explained by decoding and/or language comprehension deficits (Aaron et al., 1999; Catts et al., 2003; Ebert & Scott, 2016; Morris et al., 2017). Catts et al. (2003) hypothesized that the identification of this group may be due to one of three reasons. First, it might indicate that some of the children who participated in previous SVR classification studies did not have reading difficulties. Second, it could be due to measurement error and/or the use of cut‐off lines to identify the poor reader groups. Finally, it could indicate that children in the unexplained poor reader group experience reading comprehension difficulties due to a third variable that is not included within the SVR. This possibility would falsify the SVR because it would indicate that a third variable needs to be added to the SVR model to provide a fuller account of children's reading difficulties.
Previous SVR classification studies are subject to two main limitations. First, they have not established that the SVR can be used as a valid classification system. Issues related to sample size, participant recruitment criteria, analysis, and interpretation mean that further investigation is required; these issues also mean that previous classification studies may have either over‐ or underestimated the proportion of children classified as having dyslexia or SCD. Second, previous studies have been unable to rule out the possibility that there is a group of children whose reading difficulties cannot be explained by the SVR.
One of the greatest challenges faced by researchers who wish to conduct classification research is identifying a sufficiently large sample of poor readers. If a sufficient sample cannot be obtained, a limited number of children are likely to be assigned to the smaller poor reader categories, thus making it difficult to determine whether the proportion of children assigned to each poor reader category is representative of all poor readers. This means a large number of children must be screened to identify a sufficient number of struggling readers. For example, Aaron et al. (1999) screened 139 children to identify 16 poor readers. The three other classification studies took steps to overcome this challenge (Catts et al., 2003; Ebert & Scott, 2016; Morris et al., 2017). Ebert and Scott (2016) selected participants who had been referred for speech‐language assessments, and Catts et al. (2003) selected participants who were taking part in a longitudinal study on language impairments. These decisions increased the likelihood that the studies would identify a sufficient sample of poor readers because children with language difficulties are more likely than typically achieving children to exhibit reading difficulties (Cain & Oakhill, 2007; Catts et al., 2003). However, because these children are more likely to exhibit language difficulties, they may not be representative of all poor readers.
Ebert and Scott (2016) also included participants from a far wider age range than any of the other studies (6–16.7 years of age). However, this approach may have influenced the obtained results due to the increasing contribution that language comprehension makes to reading comprehension over time (Adlof et al., 2006; Catts, 2018; Georgiou et al., 2009), which may have influenced the proportion of children who were assigned to each poor reader group. Compared with studies with younger participants, studies that classify older children are likely to identify a larger proportion of children who exhibit the SCD profile and a smaller proportion of children who exhibit the dyslexia profile (Catts et al., 2005). This limitation can be mitigated by including participants who are similar in age or by reporting results for separate age groups when working with participants that span a wide age range.
Morris et al. (2017) included children who were performing below the 50th percentile on an end‐of‐year reading test. This means that at least some of the children included in their study were typically achieving readers. Catts et al. (2003) hypothesized that including typically achieving children within a classification study could be one of the reasons why an unexplained group of poor readers is identified. When typically achieving children are included within classification studies, they are likely to fall within the unexplained poor reader group because it is unlikely they will exhibit pronounced decoding and/or language comprehension difficulties.
Previous SVR classification studies have used cut‐off points on the decoding and language comprehension variables to distinguish between typically achieving children and children who struggle with these skills. Catts et al. (2003) conjectured that the use of cut‐off points could explain why a group of unexplained poor readers has been identified in previous classification studies. If the cut‐off points in these studies had been set higher, at least some of the children assigned to the unexplained poor reader group would have been reassigned to one of the other three poor reader groups.
SVR classification studies have largely failed to examine the cognitive profiles of the poor reader groups, although Catts et al. (2003) compared the poor reader groups across a number of cognitive processes. Analyses compared the groups on measures of decoding, language comprehension, reading comprehension, phonological awareness, rapid naming ability, and reading experience (title recognition test). The results reported in this research mostly fell in the expected direction. The dyslexia group performed more poorly than the SCD group on the decoding assessment and the SCD group performed more poorly than the dyslexia group on the language comprehension assessment, while the mixed difficulty group exhibited difficulties in both of these skills. However, the results of the phonological awareness and rapid naming assessments were not consistent with established expectations that children with dyslexia would demonstrate greater difficulties with these skills than children who exhibit only language comprehension difficulties (Lauterbach et al., 2017). Catts and colleagues found no significant difference between the dyslexia and SCD groups on these assessments. This finding may be due to limitations associated with the operationalisation of variables and the age of the participants. Catts and colleagues operationalised rapid naming with a picture‐naming test; however, operationalising rapid naming with an alphanumeric naming test may have resulted in different outcomes because of the close association between alphanumeric naming tests and reading (Araújo et al., 2015; Georgiou & Das, 2018). It may be that Catts and colleagues did not identify a difference in phonological awareness ability between the SCD and dyslexia groups because they only assessed children's ability to delete a syllable or sound and verbalize the remaining sound sequence. In contrast, Lauterbach et al. (2017) differentiated between participants (aged 7–20 years) with dyslexia and specific language impairment (SLI) using a discriminant function analysis that included language comprehension, word attack, and phoneme deletion tests. Children with SLI share many similarities with children who exhibit the SCD profile, including difficulty understanding spoken language (Kelso et al., 2007), which suggests that the phonological awareness and rapid naming results reported by Catts and colleagues should be interpreted cautiously.
The current study sought to address the limitations associated with previous classification research by including a relatively large sample of children of a similar age who were not initially identified because of some other language or learning difficulty. The current study also grouped children in two ways: first, using the traditional cut‐off point approach and, second, using the results of a cluster analysis. These groupings were used to determine whether the three‐group poor reader classification predicted by the SVR could be supported. Specifically, this research sought to answer two questions: Does a two‐step cluster analysis approach provide a better explanation for the data than the traditional cut‐off point approach? Do the poor reader groups demonstrate distinct cognitive profiles?
2. METHOD
2.1. Participants
The participants came from nine urban primary schools in New Zealand. These children were in Grades 3, 4, and 5 (aged 8–10 years). Children in these grades were targeted given that reading comprehension ability has been found to be influenced, to a similar extent, by both decoding and language comprehension ability in this age range (Adlof et al., 2006; Catts, 2018; Georgiou et al., 2009). The schools were asked to identify children who performed below the 40th percentile on one of two school‐based standardized assessments commonly used within New Zealand: the e‐asTTle Reading test (Auckland UniServices Limited, 2009) or the Progressive Achievement Test for Reading Comprehension (Darr et al., 2008). Teachers were also allowed to nominate children who exhibited reading difficulties on other school assessments. All of the children identified were invited to take part in this research.
In total, 216 English‐speaking children took part in this study. Seven children performed above the 40th percentile on the researcher‐administered Passage Comprehension test from the Woodcock‐Johnson IV (WJIV; Schrank et al., 2014) and were excluded from the research, leaving a final sample of 209 children. The majority of the children in this study (73%) came from schools in higher socio‐economic communities and the average age of the participants was nine years and eight months (SDage = 11 months). Table 1 provides an overview of the participants broken down by grade and gender. This research adhered to the ethical requirements of the participating university.
TABLE 1.
Participant demographics
Grade | Males n (% of grade) | Females n (% of grade) | Total n (% of all participants) |
---|---|---|---|
3 | 35 (62.5%) | 21 (37.5%) | 56 (26.8%) |
4 | 49 (68.1%) | 23 (31.9%) | 72 (34.4%) |
5 | 46 (56.8%) | 35 (43.2%) | 81 (38.8%) |
Total | 130 (62.2%) | 79 (37.8%) | 209 (100.0%) |
2.2. Procedure and measures
All children undertook the same 14 individually administered assessments. The assessments were completed over four separate sessions within a two‐week period for each child. The assessment sessions lasted approximately 20 minutes each and were completed in a quiet room at each school. All tests were administered by the first author. The data were collected between March 2019 and March 2020. For reliability purposes, a second marker reviewed 20% of the assessment record sheets. No discrepancies between markers were identified during this process.
2.2.1. Decoding/Word reading
Decoding ability was assessed using the Letter‐Word Identification, Word Attack, and Word Reading Fluency tests from the WJIV (Schrank et al., 2014) and the Burt Word Recognition Test (Burt test; Gilmore et al., 1981). The Letter‐Word Identification test assessed children's ability to identify and pronounce individual letters and words. The Word Attack test assessed children's ability to pronounce non‐words that conform to English spelling rules. The Letter‐Word Identification (r = .95) and Word Attack (r = .87) tests demonstrated excellent reliability within this sample. These figures were similar to those reported in the WJIV manual (Schrank et al., 2014; Letter‐Word Identification = .92, Word Attack = .90). Both tests were administered following the procedures described in the WJIV manual and were stopped when a child made six consecutive errors.
The Burt test assessed children's ability to read a range of regular and irregular words that increased in length and complexity. Test administration was terminated when children were unable to correctly read 10 consecutive items. The test manual reports high internal consistency (.97) within the 8.03–10.09 age range (Gilmore et al., 1981).
Reading fluency was assessed using the Word Reading Fluency test from the WJIV. This test assessed children's ability to quickly read rows of words and circle the two words that go together. Children had three minutes to complete as many questions as possible. Each correctly answered question received one mark. The administration manual reports median reliability of .92 in the 7–11 age range (Schrank et al., 2014).
2.2.2. Language comprehension
Language comprehension was measured using the Oral Comprehension test from the WJIV Oral Language battery and the Oral Vocabulary test from the WJIV Cognitive battery. The Oral Comprehension test required children to listen to short passages and then supply a missing final word to each. The Oral Vocabulary test required children to provide synonyms and antonyms for orally presented words. The tests demonstrated acceptable to good reliability within this sample (Oral Comprehension = .75; Oral Vocabulary = .84). These figures are similar to those reported in the WJIV manual (Schrank et al., 2014; Oral Comprehension = .82, Oral Vocabulary = .89). The tests were administered following the procedures outlined in the WJIV manual and were stopped when the child made six consecutive errors.
The British Picture Vocabulary Scale, third Edition (BPVS‐III; Dunn et al., 2009) was administered to assess children's receptive vocabulary. Children were required to identify one picture from a selection of four that represented an orally presented word. A reliability figure of .91 has been reported for this measure (Dockrell & Marshall, 2015). This test was discontinued once children made eight or more errors in a set of 12 items.
2.2.3. Rapid naming
Children's rapid automatic naming speed was assessed using the Rapid Digit Naming and Rapid Letter Naming tests from the Comprehensive Test of Phonological Processing, Second Edition (CTOPP‐2; Wagner et al., 2013). These tests assessed how quickly children could name an array of digits or letters. The Examiner's Manual reported excellent test–retest reliabilities for these tests within the 7–11 age range (Rapid Digit Naming test = .90; Rapid Letter Naming test = .93).
2.2.4. Phonological awareness
The phonological awareness construct was assessed using the Phonological Processing test from the WJIV and the Blending Words, Phoneme Isolation, and Elision tests from the CTOPP‐2. The Phonological Processing test is composed of three subtests. The first subtest assessed children's ability to name words with certain sounds in a specific location within the word. Initial items in this test asked children to name a word that began with a specific sound, while later items asked children to name words with a specific sound in the middle or end of words. The second subtest assessed children's ability to rapidly name words that started with a certain sound. Children had two attempts to name as many words as possible within one minute. The initial sound varied across the two presentations. The final subtest required children to substitute a sound in a word for another sound, to create a new word. For example, one of the items at the beginning of this test asked children to change the t sound in tag to b. The test has been reported to have a median reliability of .83 in the 5–19 age range (Schrank et al., 2014). Children proceeded through subtests one and three until they made six consecutive errors.
The Blending Words test, from the CTOPP‐2, assessed children's ability to combine sounds to form words. These items increased in length and complexity as the test progressed. The Phoneme Isolation test assessed children's ability to identify specific sounds within words. For example, children were asked to identify the last sound in laugh. The Elision test assessed children's ability to delete a sound within a word to create a new word (e.g., say cup without saying the /k/ sound). The Examiner's Manual reported reliability coefficients greater than .93 on the CTOPP‐2 tests (Wagner et al., 2013). These tests were discontinued when a child made three consecutive errors.
2.2.5. Standard and composite scoring
Initially, raw scores from the WJIV subtests, the CTOPP‐2 subtests, and the BPVS‐III were converted to standard scores using the relevant administration manuals or conversion software. Because the Burt test does not report standard scores, we calculated standard scores using the mean and standard deviation from a study that administered this assessment to a similar age group of children who were representative of all ability levels (see Madelaine & Wheldall, 1998). A composite decoding score was calculated by finding each child's average standard score on the Word Attack and Letter‐Word Identification tests. A composite language comprehension score was derived by calculating each child's average score on the Oral Comprehension and Oral Vocabulary tests. These composite scores were used to classify children. Catts et al. (2003) hypothesized that the identification of an unexplained group of poor readers could be due to measurement error. To ensure the four aforementioned tests provided a reliable indication of children's decoding and language comprehension ability, Cronbach's alpha reliability coefficients were calculated for this sample and have been reported within the above sections. All the analyses described below were also performed using weighted composite decoding and language comprehension scores based on a principal components analysis. The Word Attack, Letter‐Word Identification, Word Reading Fluency, and Burt tests contributed to the weighted decoding score. The Oral Comprehension, Oral Vocabulary, and BPVS‐III tests contributed to the weighted language comprehension score. These analyses were undertaken to investigate whether assessing decoding and language comprehension with a broad range of weighted scores resulted in findings that were substantially different from those obtained using unweighted composite scores. Because these results did not differ materially from the analyses conducted using the unweighted composite scores, they are presented in the supplementary material. All analyses were performed using IBM SPSS Statistics for Windows, version 26.0.
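As a minimal illustrative sketch of this scoring step (not the authors' SPSS syntax; the file and column names below are hypothetical), the unweighted composites and a within-test Cronbach's alpha check could be computed as follows:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = children, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical inputs: per-test standard scores for each child, plus
# item-level (0/1) responses for one test, used for the reliability check.
scores = pd.read_csv("standard_scores.csv")
word_attack_items = pd.read_csv("word_attack_items.csv")

# Unweighted composites: the mean of the two standard scores per construct.
scores["decoding"] = scores[["word_attack", "letter_word_id"]].mean(axis=1)
scores["language_comprehension"] = scores[["oral_comprehension", "oral_vocabulary"]].mean(axis=1)

# Within-test internal consistency, as reported in the measure sections above.
print(f"Word Attack alpha = {cronbach_alpha(word_attack_items):.2f}")
```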
3. RESULTS
Table 2 reports the mean and standard deviation of the raw scores for each test that was administered. The scores for the Rapid Digit Naming and Rapid Letter Naming tests were measured in seconds, with faster times indicating better performance. In total, 209 children completed 13 of the 14 assessments. Three children were unable to complete the practice items on the Word Reading Fluency test, so, in accordance with the instruction manual, the test items from this assessment were not administered to them. Two of these children were in Grade 5 and the third was in Grade 3. These children were included in all classification and comparison analyses, aside from analyses that compared children's scores on the Word Reading Fluency test.
TABLE 2.
Descriptive statistics
Test | Construct | Grade 3 M (SD) | Grade 4 M (SD) | Grade 5 M (SD) | Total M (SD) |
---|---|---|---|---|---|
Passage Comp a | Reading comprehension | 22.38 (3.70) | 23.92 (3.71) | 26.94 (4.46) | 24.67 (4.42) |
Word Attack a | Decoding | 12.73 (3.70) | 15.83 (4.56) | 17.17 (4.82) | 15.52 (4.78) |
Letter‐Word Identification a | Decoding | 40.48 (7.48) | 44.76 (8.63) | 49.56 (9.09) | 45.47 (9.24) |
Burt | Decoding | 37.71 (11.10) | 47.64 (14.47) | 57.93 (16.94) | 48.97 (16.74) |
Word Reading Fluency a | Decoding | 19.24 (8.18) | 25.82 (10.82) | 33.80 (8.91) | 27.12 (11.08) |
Oral Comprehension a | Language comprehension | 13.59 (3.72) | 14.47 (3.11) | 16.44 (3.11) | 15.00 (3.48) |
Oral Vocabulary a | Language comprehension | 14.77 (4.64) | 17.24 (4.44) | 19.98 (4.72) | 17.64 (5.04) |
BPVS‐III | Language comprehension | 98.59 (16.80) | 106.40 (15.36) | 115.69 (13.80) | 107.91 (16.62) |
Phonological Processing a | Phonological awareness | 27.27 (8.28) | 30.14 (8.08) | 35.01 (8.37) | 31.26 (8.81) |
Elision b | Phonological awareness | 16.59 (4.28) | 18.90 (5.48) | 21.91 (6.39) | 19.45 (5.96) |
Blending Words b | Phonological awareness | 16.75 (5.36) | 18.15 (4.09) | 19.11 (5.50) | 18.15 (5.08) |
Phoneme Isolation b | Phonological awareness | 19.86 (6.43) | 18.40 (6.10) | 20.78 (6.25) | 19.71 (6.30) |
Rapid Digit Naming b | Rapid naming | 22.30 (6.67) | 20.22 (5.26) | 17.75 (4.76) | 19.82 (5.77) |
Rapid Letter Naming b | Rapid naming | 25.29 (6.80) | 22.35 (6.51) | 19.44 (8.29) | 22.01 (7.65) |
Note: 206 children completed the Word Reading Fluency test (Grade 3 = 55, Grade 4 = 72, Grade 5 = 79). 209 children completed the remaining tests (Grade 3 = 56, Grade 4 = 72, Grade 5 = 81).
Rapid Digit Naming and Rapid Letter Naming scores are measured in seconds. All other test units are number of correct responses (raw scores).
a Tests from the WJIV.
b Tests from the CTOPP‐2.
3.1. Traditional classification approach
Children were initially classified using the traditional classification approach. As in previous studies (Aaron et al., 1999; Catts et al., 2003), cut‐off lines were placed one standard deviation below the mean (standard score = 85) on the decoding and language comprehension variables (see Figure 1). This resulted in 23% of children being assigned to the mixed difficulty group (decoding and language comprehension <85), 22% being assigned to the dyslexia group (decoding <85, language comprehension ≥85), 24% being assigned to the SCD group (decoding ≥85, language comprehension <85), and 31% being assigned to the unexplained poor reader group (decoding and language comprehension ≥85). A multinomial logistic regression analysis was then conducted. This type of analysis is used to model the predictive relationship between independent variables and dependent unordered categorical variables. In this research, the poor reader groups were the dependent variable and the independent variables were the tests that were administered, apart from the four tests that contributed to the decoding and language comprehension composite scores. Initially, the Passage Comprehension, Word Reading Fluency, Burt, Rapid Digit Naming, Rapid Letter Naming, Elision, Phonological Processing, Blending Words, Phoneme Isolation, and BPVS‐III tests were entered into the analysis. Tests were then removed if they did not contribute significantly to the model. Language comprehension (BPVS‐III; χ2 = 45.869, p < .001), decoding (Burt; χ2 = 26.352, p < .001), reading comprehension (Passage Comprehension; χ2 = 30.053, p < .001) and phonological awareness (Elision; χ2 = 30.139, p < .001) tests contributed significantly to the model. This model accurately predicted assignment for 60.0% of cases in the SCD group, 75.5% of cases in the mixed difficulty group, 55.6% of cases in the dyslexia group, and 69.2% of cases in the unexplained poor reader group. Overall, this model was able to accurately predict group membership for 65.6% of cases.
FIGURE 1.
Traditional classification approach
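A minimal sketch of the cut-off classification and the follow-up multinomial logistic regression is given below, assuming hypothetical file and column names and using scikit-learn in place of the SPSS procedures reported above; the per-group accuracies correspond to the row-wise proportions of a confusion matrix, and the stepwise removal of non-significant predictors is omitted for brevity:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

CUT = 85  # one standard deviation below the normative mean (SD = 15)

def traditional_group(row: pd.Series) -> str:
    """Assign a child to one of the four cut-off-based poor reader groups."""
    low_decoding = row["decoding"] < CUT
    low_language = row["language_comprehension"] < CUT
    if low_decoding and low_language:
        return "mixed"
    if low_decoding:
        return "dyslexia"
    if low_language:
        return "SCD"
    return "unexplained"

scores = pd.read_csv("standard_scores.csv")  # hypothetical file and columns
scores["group"] = scores.apply(traditional_group, axis=1)

# Predict group membership from the remaining tests (illustrative subset only).
predictors = ["passage_comprehension", "burt", "bpvs", "elision"]
X, y = scores[predictors], scores["group"]
model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)

# Row-wise accuracy of the confusion matrix, analogous to the reported
# percentage of correctly predicted cases per group.
cm = confusion_matrix(y, model.predict(X), labels=model.classes_)
print(dict(zip(model.classes_, (cm.diagonal() / cm.sum(axis=1)).round(3))))
```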
3.2. Cluster analysis approach
In this approach, children were classified using a two‐step cluster analysis that used log‐likelihood as the distance measure. Cluster analyses aim to maximize the homogeneity within groups and maximize the heterogeneity between groups. The two‐step cluster analysis grouped children using two steps based on their performance on the decoding and language comprehension variables. The aim of the first step was to calculate a new data matrix with fewer cases for the second step in the clustering process. The program examined every record and decided whether that record should be merged with a previously formed group of records (pre‐cluster) or whether it should form the basis for a new pre‐cluster based on the log‐likelihood distance criterion. In the second step, the program took these pre‐clusters and grouped them into the desired number of clusters using a stepwise hierarchical clustering algorithm. In this analysis, the program was allowed to determine the optimal number of groupings. It did this in two steps. First, the program used the Bayesian Information Criterion (BIC) to estimate the number of clusters in the data by identifying the point at which the decrease in BIC started to diminish as the number of clusters increased. Models with small BIC are considered good models. The second step calculated the ratio change in BIC for the merging of each cluster. A large change in the ratio indicates the merging of two clusters that should not be merged. The program used this information to adjust the initial estimate to identify the most likely number of clusters (for a more detailed description of the two‐step cluster analysis procedure refer to Bacher et al., 2004 and Chiu et al., 2001).
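SPSS's two-step algorithm (log-likelihood pre-clustering followed by hierarchical merging) is not directly available in most open-source libraries. A loosely analogous, hypothetical sketch that captures the BIC-based choice of the number of clusters, using Gaussian mixture models fitted to the two composite scores, is shown below; it approximates the spirit of the procedure described above rather than reproducing it exactly:

```python
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

scores = pd.read_csv("standard_scores.csv")  # hypothetical file and columns
X = StandardScaler().fit_transform(scores[["decoding", "language_comprehension"]])

# Fit candidate mixture models and keep the number of clusters with the
# smallest BIC (smaller BIC = better trade-off between fit and complexity).
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 7)}
bics = {k: model.bic(X) for k, model in fits.items()}
best_k = min(bics, key=bics.get)

scores["cluster"] = fits[best_k].predict(X)
print(f"BIC by number of clusters: {bics}")
print(f"Selected {best_k} clusters; cluster sizes:\n{scores['cluster'].value_counts()}")
```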
The cluster analysis identified three poor reader groups with 17% of children assigned to the mixed difficulty group, 39% of children assigned to the dyslexia group, and 44% of children assigned to the SCD group. As shown in Figure 2, this approach did not identify a group of children who exhibited an unexplained poor reader profile. A multinomial logistic regression analysis was again conducted following the same process used with the traditional classification approach. Tests that assessed language comprehension ability (BPVS‐III; χ2 = 44.335, p < .001), decoding ability (Burt; χ2 = 18.045, p < .001), reading comprehension (Passage Comprehension test; χ2 = 11.591, p = .003) and phonological awareness (Elision; χ2 = 25.449, p < .001) contributed significantly to the model. The multinomial logistic regression analysis indicated that group membership could be predicted with greater accuracy when children had been grouped using the cluster analysis approach (74.2%) rather than the traditional classification approach (65.6%). The model was able to accurately predict assignment for 78.5% of cases to the SCD group, 69.1% of cases to the dyslexia group, and 74.3% of cases to the mixed difficulty group. This level of accuracy was superior to that obtained when children were grouped using the traditional classification approach (SCD = 60%; dyslexia = 55.6%; mixed difficulty group = 75.5%).
FIGURE 2.
Cluster analysis approach
A second cluster analysis was conducted to determine whether a four‐group model also provided a good fit for the data. In this analysis, the program was not allowed to identify the optimal number of groupings (three groups, see previous analysis). Instead, the program was forced to identify four groups of poor readers. Using this approach, 13% of poor readers were assigned to the mixed difficulty group, 35% were assigned to the dyslexia group, 32% were assigned to the SCD group, and 20% were assigned to the unexplained poor reader group. A multinomial logistic regression analysis confirmed that group membership could be predicted more accurately using this approach than the traditional classification approach; however, it was not an improvement on the three‐group cluster analysis approach. Overall, 69.4% of cases could be predicted accurately. The accuracy with which group membership could be predicted for the mixed difficulty (75.0%) and dyslexia (73.0%) groups was similar to that of the three‐group cluster analysis approach, although the SCD group was predicted less accurately using this approach (63.6%) in comparison to the three‐group approach (78.5%). The unexplained poor reader group was accurately predicted for 68.3% of cases. However, this does not mean that an unexplained poor reader group exists because the cluster analysis was forced to identify this group.
A further cluster analysis was undertaken using composite decoding and language comprehension scores based on weights obtained from a principal component analysis. This analysis was undertaken to investigate whether weighted test scores provided a better indication of the children's decoding and language comprehension ability. In the previous analyses, only two tests contributed to the decoding (Word Attack and Letter‐Word Identification tests) and language comprehension (Oral Comprehension and Oral Vocabulary tests) variables. The factor scores used in the current analyses included a broader range of tests to enable the breadth of these constructs to be examined in greater detail. The Word Attack, Letter‐Word Identification, Word Reading Fluency, and Burt tests contributed to the decoding factor. The Oral Comprehension, Oral Vocabulary, and BPVS‐III tests contributed to the language comprehension factor. The factor loadings are shown in Table 1 of the supplementary material. A two‐step cluster analysis identified the three poor reader groups predicted by the SVR model (dyslexia, SCD, and mixed difficulty). Figure 3 displays the proportion of children assigned to each poor reader category. The distribution was similar to that reported in Figure 2. Because of the similarity between these approaches, the results associated with this analysis have been reported in the supplementary material. In subsequent sections, only the results associated with the cluster analysis approach based on the composite decoding (Word Attack and Letter‐Word Identification tests) and language comprehension variables (Oral Comprehension and Oral Vocabulary tests) are reported in the text.
FIGURE 3.
Classification by cluster analysis using factor scores
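As with the earlier sketches, the weighted composites can be illustrated hypothetically (with assumed file and column names) by taking each child's score on the first principal component of the relevant tests; the three missing Word Reading Fluency scores would need to be handled (here simply dropped) before fitting:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

scores = pd.read_csv("standard_scores.csv").dropna()  # hypothetical file and columns

decoding_tests = ["word_attack", "letter_word_id", "word_reading_fluency", "burt"]
language_tests = ["oral_comprehension", "oral_vocabulary", "bpvs"]

def first_component(df: pd.DataFrame, cols: list) -> pd.Series:
    """Weighted composite: each child's score on the first principal component."""
    z = StandardScaler().fit_transform(df[cols])
    return pd.Series(PCA(n_components=1).fit_transform(z).ravel(), index=df.index)

scores["decoding_factor"] = first_component(scores, decoding_tests)
scores["language_factor"] = first_component(scores, language_tests)
```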
3.3. Differences between groups
The analyses here investigated whether the poor reader groups demonstrated distinct cognitive profiles. These analyses are based on the three‐group cluster analysis approach, as this approach provided the best fit for the data. Table 3 provides an overview of each group's performance on the 14 tests that were administered in this research. A one‐way between‐subjects multivariate analysis of variance (MANOVA) was carried out to examine the impact of group assignment on test performance. The between‐subjects factor comprised the three poor reader groups: dyslexia, SCD, and mixed difficulty. The dependent variables were children's scores on the 14 tests. There was a significant difference between the groups on the combined dependent variables, F(28,380) = 20.581, p < .001; Wilks' lambda = .158. Follow‐up univariate ANOVAs were conducted to determine whether there were significant differences between the groups on each test. Where the assumption of equal variance was not satisfied, the results from Welch and Brown‐Forsythe tests are also reported. Post hoc tests were conducted where significant differences were identified: Tukey's honestly significant difference test where equal variance could be assumed, and Games‐Howell tests where it could not.
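A hedged sketch of this analysis pipeline, using statsmodels rather than SPSS, hypothetical file and column names, and only a subset of the 14 dependent variables for brevity, might look like the following; Games–Howell comparisons are not part of statsmodels but are available in, for example, the pingouin package:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.read_csv("scores_with_clusters.csv")  # hypothetical file and columns

# One-way MANOVA across the three cluster-derived groups (reports Wilks' lambda).
dvs = ["passage_comprehension", "word_attack", "elision", "rapid_digit_naming"]
print(MANOVA.from_formula(" + ".join(dvs) + " ~ group", data=data).mv_test())

# Follow-up univariate ANOVA and Tukey HSD post hoc comparisons for one measure.
# (Games-Howell comparisons, used where variances were unequal, are available
# elsewhere, e.g. pingouin.pairwise_gameshowell.)
print(sm.stats.anova_lm(ols("elision ~ group", data=data).fit()))
print(pairwise_tukeyhsd(data["elision"], data["group"]))
```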
TABLE 3.
Comparisons by poor reader group based on the two‐step cluster analysis approach (3 groups)
(a)

| Test | Group | N | M | SD | Significant differences |
|---|---|---|---|---|---|
| Passage Comp: F(2,206) = 54.246, p < .001; Welch: F(2,79.898) = 26.729, p < .001; Brown–Forsythe: F(2,68.798) = 38.385, p < .001 | Mixed | 35 | 62.03 | 14.72 | Mixed < Dyslexia a; Mixed < SCD a |
| | Dyslexia | 81 | 79.95 | 9.24 | |
| | SCD | 93 | 81.10 | 7.32 | |
| Word Attack: F(2,206) = 91.312, p < .001; Welch: F(2,80.379) = 72.806, p < .001; Brown–Forsythe: F(2,60.183) = 60.506, p < .001 | Mixed | 35 | 66.89 | 17.62 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia < SCD a |
| | Dyslexia | 81 | 81.53 | 8.74 | |
| | SCD | 93 | 94.61 | 8.78 | |
| Letter‐Word Identification: F(2,206) = 88.016, p < .001; Welch: F(2,80.414) = 64.280, p < .001; Brown–Forsythe: F(2,66.524) = 61.399, p < .001 | Mixed | 35 | 66.29 | 15.89 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia < SCD a |
| | Dyslexia | 81 | 82.00 | 9.34 | |
| | SCD | 93 | 92.73 | 8.05 | |
| Burt: F(2,206) = 25.716, p < .001 | Mixed | 35 | 73.68 | 8.54 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia < SCD a |
| | Dyslexia | 81 | 82.54 | 9.91 | |
| | SCD | 93 | 91.74 | 8.69 | |
| Word Reading Fluency: F(2,203) = 25.438, p < .001 | Mixed | 32 | 75.25 | 9.79 | Mixed < Dyslexia c; Mixed < SCD c; Dyslexia < SCD c |
| | Dyslexia | 81 | 85.77 | 11.95 | |
| | SCD | 93 | 91.18 | 10.46 | |
| Oral Comprehension: F(2,206) = 91.220, p < .001; Welch: F(2,82.511) = 90.537, p < .001; Brown–Forsythe: F(2,74.533) = 69.929, p < .001 | Mixed | 35 | 71.97 | 7.13 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia > SCD a |
| | Dyslexia | 81 | 96.05 | 7.13 | |
| | SCD | 93 | 82.16 | 9.43 | |
| Oral Vocabulary: F(2,206) = 99.554, p < .001; Welch: F(2,84.122) = 67.942, p < .001; Brown–Forsythe: F(2,87.378) = 80.364, p < .001 | Mixed | 35 | 63.00 | 12.78 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia > SCD a |
| | Dyslexia | 81 | 90.40 | 9.30 | |
| | SCD | 93 | 82.14 | 8.41 | |
| BPVS‐III: F(2,206) = 32.904, p < .001; Welch: F(2,103.177) = 32.517, p < .001; Brown–Forsythe: F(2,173.897) = 37.237, p < .001 | Mixed | 35 | 74.83 | 6.94 | Mixed < Dyslexia a; Mixed < SCD a; Dyslexia > SCD a |
| | Dyslexia | 81 | 88.83 | 11.66 | |
| | SCD | 93 | 79.75 | 8.26 | |

(b)

| Test | Group | N | M | SD | Significant differences |
|---|---|---|---|---|---|
| Phonological Processing: F(2,206) = 15.382, p < .001 | Mixed | 35 | 70.03 | 12.20 | Mixed < Dyslexia c; Mixed < SCD c |
| | Dyslexia | 81 | 81.28 | 11.93 | |
| | SCD | 93 | 83.35 | 12.60 | |
| Elision: F(2,206) = 57.750, p < .001 | Mixed | 35 | 74.14 | 8.62 | Mixed < Dyslexia c; Mixed < SCD c; Dyslexia < SCD c |
| | Dyslexia | 81 | 81.54 | 8.54 | |
| | SCD | 93 | 91.77 | 9.52 | |
| Blending Words: F(2,206) = 10.784, p < .001 | Mixed | 35 | 72.29 | 13.08 | Mixed < Dyslexia c; Mixed < SCD c |
| | Dyslexia | 81 | 83.15 | 13.75 | |
| | SCD | 93 | 83.60 | 12.10 | |
| Phoneme Isolation: F(2,206) = 10.863, p < .001; Welch: F(2,101.270) = 13.390, p < .001; Brown–Forsythe: F(2,173.949) = 11.976, p < .001 | Mixed | 35 | 75.71 | 9.17 | Mixed < Dyslexia a; Mixed < SCD a |
| | Dyslexia | 81 | 86.54 | 13.22 | |
| | SCD | 93 | 83.12 | 10.60 | |
| Rapid Digit Naming: F(2,206) = 23.228, p < .001 | Mixed | 35 | 85.14 | 9.81 | Mixed < SCD c; Dyslexia < SCD c |
| | Dyslexia | 81 | 89.69 | 8.56 | |
| | SCD | 93 | 96.99 | 10.59 | |
| Rapid Letter Naming: F(2,206) = 22.420, p < .001 | Mixed | 35 | 84.71 | 9.70 | Mixed < SCD c; Dyslexia < SCD c |
| | Dyslexia | 81 | 88.70 | 8.02 | |
| | SCD | 93 | 95.38 | 9.42 | |
Note: Significant differences between groups are recorded in the right‐hand column. Greater than and less than signs denote the direction of these differences.
a Significant difference identified using both Tukey's honestly significant difference and Games–Howell post hoc tests.
b Significant difference identified using the Games–Howell post hoc test only.
c Significant difference identified using Tukey's honestly significant difference post hoc test only.
The dyslexia group performed significantly better than the SCD group on the Oral Comprehension, Oral Vocabulary, and BPVS‐III tests. The SCD group performed significantly better than the dyslexia group on the Word Attack, Letter‐Word Identification, Word Reading Fluency, Elision, Rapid Digit Naming, Rapid Letter Naming, and Burt tests. While these groups exhibited distinct cognitive profiles, they performed at a similar level on the reading comprehension test (around the ninth percentile). The mixed difficulty group demonstrated pronounced reading comprehension difficulties. Their average standard score placed them within the bottom first percentile on this test. They also performed significantly worse than the SCD group on every test and significantly worse than the dyslexia group on all but the two rapid naming tests. The relative strengths and weaknesses exhibited by these groups can be seen in Figure 4. This figure is based on the results reported in Table 3.
FIGURE 4.
Poor reader profiles based on the two‐step cluster analysis approach (3 groups)
4. DISCUSSION
The results indicated that children with reading comprehension difficulties can be assigned to a dyslexia, SCD, or mixed difficulty category using a two‐step cluster analysis approach. In contrast to the traditional approach, the cluster analysis approach did not identify an unexplained group of poor readers. A second cluster analysis, based on composite decoding and language comprehension scores weighted using a principal components analysis, identified the same poor reader groups. Multinomial logistic regression analyses predicted group assignment with greater accuracy when children were classified into one of the three poor reader groups predicted by the SVR using the cluster analysis approach rather than the traditional classification approach, which identified four poor reader groups. These results provide additional support for the SVR as a framework for identifying and classifying reading difficulties.
4.1. Prevalence of poor reader profiles
As expected, the children did not fall into sharply separated poor reader subgroups. This finding is consistent with previous SVR classification studies, which have found that poor readers are distributed across the lower range of the decoding and language comprehension variables (Aaron et al., 1999; Catts et al., 2003; Ebert & Scott, 2016; Morris et al., 2017). Notwithstanding this observation, sufficient similarities exist to assign children to relatively homogeneous subgroups based on their decoding and language comprehension proficiency.
Previous classification studies have used cut‐off points on the decoding and language comprehension variables to distinguish between typically achieving children and children who struggle with these skills. Catts et al. (2003) conjectured that the use of cut‐off points could explain why a group of unexplained poor readers has been identified in previous classification studies. Decoding and language comprehension ability fall on a continuum, which means there is no obvious cut‐off point that can be used to distinguish between poor and typically developing readers. When children are separated using cut‐off points, research has identified the three poor reader groups predicted by the SVR and a group of unexplained poor readers. However, if sufficiently high cut‐off points had been used, no children would have fallen within the unexplained poor reader category.
The placement of cut‐off points influences the proportion of poor readers that are assigned to each poor reader category. If the cut‐off points were raised from their traditional placement fewer children would be assigned to dyslexia and SCD groups. Children in the SCD group who performed just above the decoding cut‐off point and children in the dyslexia group who performed just above the language comprehension cut‐off point would be assigned to the mixed difficulty group if cut‐off points were raised. Conversely, a greater proportion of struggling readers would be assigned to dyslexia and SCD groups if the cut‐off points were lowered. Classification based upon a cluster analysis is less susceptible to this limitation because cluster analyses aim to maximize homogeneity within groups and do not rely on cut‐off points.
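This sensitivity is easy to demonstrate: reusing the hypothetical composite scores from the earlier sketches, the following illustrative snippet shows how the four traditional group counts shift as the cut-off is moved:

```python
import pandas as pd

def group_counts(scores: pd.DataFrame, cut: float) -> dict:
    """Traditional-classification group sizes at a given cut-off score."""
    low_dec = scores["decoding"] < cut
    low_lang = scores["language_comprehension"] < cut
    group = pd.Series("unexplained", index=scores.index)
    group[low_dec & low_lang] = "mixed"
    group[low_dec & ~low_lang] = "dyslexia"
    group[~low_dec & low_lang] = "SCD"
    return group.value_counts().to_dict()

scores = pd.read_csv("standard_scores.csv")  # hypothetical file and columns
for cut in (80, 85, 90):
    print(cut, group_counts(scores, cut))
```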
This research found that a greater proportion of children could be assigned to the dyslexia (39%) and SCD (44%) groups in the cluster analysis approach than in the traditional classification approach (22% and 24%, respectively). As a result, fewer children were assigned to the mixed difficulty group in the cluster analysis approach (17%) than in the traditional classification approach (23%) and no unexplained poor reader group was identified. This finding lends support to Catts et al.'s (2003) hypothesis that the identification of an unexplained poor reader group may be an artefact of the classification approach used in previous studies. Specifically, using cut‐off points to classify poor readers may erroneously lead to the identification of a fourth group of poor readers whose difficulties are not explained by the SVR. When an alternative classification approach based on a cluster analysis, rather than cut‐off points, was implemented, an unexplained group of poor readers was not identified. This finding suggests that a relatively large proportion of children who exhibit the dyslexia, SCD, or mixed difficulty profile are assigned to the unexplained poor reader group when cut‐off points are used to identify these groups. As a result, these children may be excluded from programmes that target children exhibiting dyslexia, SCD, or mixed difficulty profiles, leading to inequitable outcomes for these children. Future research should attempt to replicate the results from these analyses with children similar in age to those in this study. Whilst it seems likely that a three‐group model will also explain the reading comprehension difficulties exhibited by older children, this must be confirmed through future research.
Traditionally, New Zealand has opposed the use of labels such as dyslexia because of concerns that the use of labels may stigmatize some ethnicities who are more likely to exhibit reading difficulties (Tunmer & Chapman, 2007). Formal assessments for dyslexia and other learning difficulties can only occur outside the school system in New Zealand (New Zealand Qualifications Authority, n.d.), and no funding is provided for these assessments. As a result, few children in New Zealand receive a formal diagnosis of dyslexia or other learning difficulties. It would be interesting to investigate whether children who receive a formal diagnosis of dyslexia or a SCD fall within this category when children are classified using the cluster analysis approach described in this research. Previous research found that group membership derived through clustering was not a good predictor of diagnostic labels (Astle et al., 2019; Bradshaw et al., 2021). However, these studies used different clustering procedures than those used in this research and included a more diverse range of learning difficulties. Research investigating the relationship between the classification approach used in this research and formal learning difficulty assessments may need to be undertaken outside New Zealand in a country where formal diagnoses of learning difficulties are more common.
4.2. Cognitive profiles of poor reader groups
A valid classification approach should be able to differentiate between groups of poor readers (Catts et al., 2003). The results indicate that dyslexia, SCD, and mixed difficulty groups exhibit distinct cognitive profiles. Children in the dyslexia group performed significantly more poorly than children in the SCD group on tests that assessed decoding (Word Attack, Letter‐Word Identification, Word Reading Fluency, and Burt tests), rapid naming (Rapid Digit Naming and Rapid Letter Naming tests), and phoneme deletion (Elision test) ability. In contrast, they performed significantly better than the SCD group on all the language comprehension measures (Oral Comprehension, Oral Vocabulary, and BPVS‐III tests). In addition, the multinomial logistic regression analyses showed that it was possible to accurately discriminate between the three poor reader groups using tests that assessed decoding (Burt test), language comprehension (BPVS‐III), reading comprehension (Passage Comprehension test), and phoneme deletion (Elision test) ability.
All the poor reader groups exhibited phonological awareness difficulties. However, the dyslexia group exhibited significantly greater difficulties than the SCD group on the phoneme deletion test (Elision test). The ability to identify and manipulate phonemes is essential for skilled decoding within an alphabetic orthography (Ehri, 2014; Wren, 2001). Children must be able to identify individual phonemes within words when developing mental representations of a word in their mind and when converting graphemes to phonemes when reading new or unfamiliar words (Arrow & Tunmer, 2012; Diamanti et al., 2018; Kendeou et al., 2014; Tunmer & Hoover, 2019). Because the ability to identify and manipulate phonemes is essential for skilled decoding, it is not surprising that children with dyslexia exhibit difficulties with this skill. In contrast, children who exhibit the SCD profile are less likely to exhibit phoneme manipulation difficulties than children with dyslexia because they are less likely to demonstrate decoding difficulties.
Rapid naming ability did not add predictive utility to the multinomial logistic regression model. However, children in the dyslexia group did demonstrate significantly greater difficulties on the rapid naming tests than children in the SCD group. This suggests that while rapid naming ability may not be a useful variable for differentiating between all three groups of poor readers, it can be used to discriminate between children in dyslexia and SCD groups. Children in dyslexia and mixed difficulty groups performed significantly more poorly than children in the SCD group on the rapid naming tests. These two groups of children also performed significantly more poorly on the decoding tests than children in the SCD group.
Children's age may mediate the relationship between group assignment and rapid naming ability. In this research, children in the dyslexia group performed significantly worse on the rapid naming tests than children in the SCD group. Some research suggests that rapid naming difficulties may become more prominent over time in children with dyslexia (Araújo & Faísca, 2019). This suggests that the profile that children in the SCD and dyslexia groups exhibit on rapid naming tests may vary over time. The differences between these groups may be more pronounced in older children and less pronounced in children who are younger than those who participated in this research.
Research has shown that rapid naming tests are more strongly related to reading fluency tests than reading accuracy tests across a range of orthographies (Araújo et al., 2015). This finding is consistent with the results reported in this study. Children in the dyslexia group performed significantly more poorly than children in the SCD group on the rapid naming and reading fluency assessments. This may be because reading fluency and rapid naming rely on some of the same cognitive processes. Both skills require attention to stimuli, visual processes used to identify and discriminate between letters, integration of visual information with stored orthographic and phonological representations, access and retrieval of phonological codes, and articulatory output (Araújo et al., 2015).
In the current study, the SCD group performed poorly on phonological awareness measures but not rapid naming measures. This suggests that the rapid naming difficulties exhibited by dyslexia and mixed difficulty groups are not due solely to phonological awareness difficulties. In addition, previous research has found that rapid naming ability is a good predictor of reading ability even after controlling for phonological awareness (Araújo & Faísca, 2019). This relationship has been found across writing systems (Araújo & Faísca, 2019) and orthographies of varying complexity (Frith et al., 1998; Handler & Fierson, 2011). These findings indicate that rapid naming ability relies, in part, on cognitive processes other than phonological awareness. These must be lower‐level cognitive processes or skills that contribute to both alphanumeric naming speed and reading fluency but not phonological awareness.
The SCD group performed at a similar level to that of their typically achieving peers on all the tests that measured decoding ability (Word Attack, Letter‐Word Identification, Word Reading Fluency, and Burt tests), the phoneme deletion test (Elision test) and both rapid naming tests. In contrast, these children demonstrated difficulties on all the tests that assessed language comprehension ability (Oral Comprehension, Oral Vocabulary, and BPVS‐III tests) and three other tests that measured phonological awareness ability (Phonological Processing, Blending Words and Phoneme Isolation tests). Therefore, broad language difficulties are the defining characteristic of children in this group.
Children are hardwired to learn language. They do not need explicit instruction to become proficient with their own language in all but the most language‐deprived backgrounds (Bishop & Snowling, 2004). Although most home environments are sufficient for language development, not all children are exposed to the formal decontextualized language that appears in print (Beck et al., 2013). It is possible that the difficulties exhibited by children in the SCD and mixed difficulty groups are due, in part, to limited experience with the language used in written texts (Beck et al., 2013).
It is also possible that a variable not assessed in this research could be the root cause of the difficulties exhibited by this group. For example, in addition to the difficulties identified in this study, children with language comprehension difficulties have been found to demonstrate impairment on tests that assess syntax knowledge, auditory perception, and verbal working memory (Leonard, 2014). It is unlikely that one of these variables alone is the root cause of the language difficulties exhibited by the SCD group because many of the difficulties noted above have also been observed in children with dyslexia (Bishop & Snowling, 2004; Diamanti et al., 2018; Lauterbach et al., 2017). Therefore, it is likely that a combination of these factors contributes to the language comprehension difficulties exhibited by this group. It is also likely that the relative importance of these factors varies from person to person. For example, the primary cause of one child's language comprehension difficulties may be due to an impoverished home language environment while another child's difficulties may be due, primarily, to impaired syntactic knowledge, or difficulties with one or more of the other cognitive processes associated with comprehension. Neurobiological and etiological factors that include differences in brain structure, as well as associated genetic and environmental causes, will influence phonological processing, which will in turn influence performance at the behavioural level during reading and spelling (Bishop & Snowling, 2004; Thapar & Rutter, 2015). Previous research has also found that the comprehension difficulties exhibited by poor comprehenders are not due to one fundamental weakness (Cain & Oakhill, 2007). This suggests that educators should first seek to identify what factors contribute to a child's language comprehension difficulties. They must be mindful that similar performance difficulties at the behavioural level can be due to different underlying difficulties. Once these factors have been identified, an instructional programme should be devised to target these difficulties.
The results from this study show that children who exhibit the SCD profile perform at a similar level to that of children in the dyslexia group on reading comprehension measures. Their average score on the Passage Comprehension test placed them in the lowest 10th percentile of all readers. Although they performed at a similar level to children in the dyslexia group, they may be far more difficult for teachers to identify in the early primary years. Early reading instruction focuses on the development of decoding skills (Castles et al., 2018). It is likely that children who struggle with decoding will quickly come to the attention of teachers. These difficulties are characteristic of learners in either dyslexia or the mixed difficulty groups. In contrast, children in the SCD group may not be identified by teachers because their decoding ability is similar to that of their typically developing peers. In addition, their language comprehension difficulties may not yet be apparent because the demands placed on them by instructional texts used with this age group do not yet exceed their language comprehension ability (Georgiou et al., 2009).
Research has found that children with language difficulties can make significant gains, and maintain them, in reading comprehension if they are provided with a programme that targets their oral language difficulties (e.g., Bowyer‐Crane et al., 2008; Clarke et al., 2010). It is not just children with SCD that will benefit from this type of instruction. Both the mixed difficulty and SCD groups demonstrated language comprehension difficulties. These categories may include over 80% of all struggling readers. Children with language difficulties are often not identified in schools because of a misconception that children who can participate in social conversations have the necessary language skills to comprehend written text (Adlof & Hogan, 2019). In addition, school assessments may not be sufficiently sensitive to identify children with language difficulties (Adlof & Hogan, 2019). For these reasons, the most efficacious approach may be to ensure that all children are provided with reading instruction that also targets language comprehension skills.
Children in the mixed difficulty group demonstrated difficulties across all the tests that were administered in this research. They also performed significantly more poorly than children in the SCD and dyslexia groups on almost all of the assessments. The SVR predicts that children who perform poorly on both the decoding and the language comprehension variables will perform more poorly on measures of reading comprehension than children who perform poorly on only one of these variables. Notwithstanding this prediction, it was surprising how difficult this group found the reading comprehension test. Whereas the average score on the reading comprehension assessment for dyslexia and SCD groups fell within the bottom 10th percentile, the average score for the mixed difficulty group fell within the bottom first percentile. This finding is consistent with other studies (Catts et al., 2003; Tunmer & Chapman, 2007) and indicates that the mixed difficulty group exhibit substantially greater reading comprehension difficulties than children in the other two poor reader groups. These results suggest that the children in the mixed difficulty group may require far greater support than children in the two other poor reader categories. This support should include reading programmes designed to address their reading difficulties. They may also require other accommodations to mitigate the reading comprehension difficulties they experience.
5. CONCLUSION
This research indicates that struggling readers can be assigned to one of the three poor reader groups predicted by the SVR: dyslexia, SCD, and mixed difficulty. These results support the prediction that the identification of an unexplained poor reader group in earlier studies may be due to the methodology used to classify struggling readers. The three poor reader groups exhibited distinct cognitive profiles characterized by relative strengths and weaknesses in decoding, language comprehension, phonological awareness, and rapid naming ability. Although relative strengths and weaknesses were identified between groups, the average scores for all groups fell below those of typically achieving children on the decoding and language comprehension constructs. Children in the dyslexia and SCD groups performed at a similar level on the reading comprehension assessment, but both groups exhibited less pronounced reading comprehension difficulties than children in the mixed difficulty group, who displayed the poorest scores on both the decoding and language comprehension variables. Despite performing at a similar level on the reading comprehension assessment, the dyslexia and SCD groups exhibited different cognitive profiles. These findings emphasize the importance of assessing children's performance on the skills that underpin reading comprehension in addition to their reading comprehension ability.
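To illustrate the methodological point about classification (and not to reproduce this study's analysis, which the reference list suggests was carried out with SPSS TwoStep clustering), the sketch below contrasts a fixed cut‐off assignment with a data‐driven cluster assignment on two standardized SVR composites. The simulated z‐scores, the cut‐off of −1, and the use of scikit‐learn's KMeans are illustrative assumptions only. The contrast it makes concrete is this: with cut‐offs, a poor comprehender who sits just above both thresholds falls into an "unexplained" category, whereas a three‐cluster solution assigns every poor reader to one of the groups by construction.

```python
# A minimal, hypothetical sketch (not the study's SPSS TwoStep analysis):
# contrasting cut-off classification with cluster-based classification of
# poor readers on two simulated SVR composites (decoding, language comprehension).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated z-scores for 90 poor readers; columns = decoding, language comprehension.
scores = np.vstack([
    rng.normal([-1.5, 0.0], 0.4, size=(30, 2)),   # dyslexia-like profile
    rng.normal([0.0, -1.5], 0.4, size=(30, 2)),   # SCD-like profile
    rng.normal([-1.5, -1.5], 0.4, size=(30, 2)),  # mixed-difficulty profile
])

# Traditional approach: a fixed cut-off (here z < -1) on each component.
low_dec, low_lang = scores[:, 0] < -1, scores[:, 1] < -1
cutoff_group = np.array([
    "mixed" if d and l else "dyslexia" if d else "SCD" if l else "unexplained"
    for d, l in zip(low_dec, low_lang)
])

# Cluster-based approach: let the data determine the group boundaries (k = 3),
# so every poor reader is assigned to one of the three groups.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)

print("Cut-off groups:", dict(zip(*np.unique(cutoff_group, return_counts=True))))
print("Cluster sizes:", dict(zip(*np.unique(kmeans.labels_, return_counts=True))))
```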
SUPPORTING INFORMATION
Appendix S1 Supporting Information
ACKNOWLEDGMENT
Open access publishing facilitated by University of Canterbury, as part of the Wiley ‐ University of Canterbury agreement via the Council of Australian University Librarians.
Sleeman, M. , Everatt, J. , Arrow, A. , & Denston, A. (2022). The identification and classification of struggling readers based on the simple view of reading. Dyslexia, 28(3), 256–275. 10.1002/dys.1719
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
REFERENCES
- Aaron, P. G., Joshi, M., & Williams, K. A. (1999). Not all reading disabilities are alike. Journal of Learning Disabilities, 32(2), 120–137. 10.1177/002221949903200203
- Adlof, S., Catts, H., & Little, T. (2006). Should the simple view of reading include a fluency component? Reading and Writing, 19(9), 933–958. 10.1007/s11145-006-9024-z
- Adlof, S. M., & Hogan, T. P. (2019). If we don't look, we won't see: Measuring language development to inform literacy instruction. Policy Insights from the Behavioral and Brain Sciences, 6(2), 210–217. 10.1177/2372732219839075
- Araújo, S., & Faísca, L. (2019). A meta‐analytic review of naming‐speed deficits in developmental dyslexia. Scientific Studies of Reading, 23(5), 1–20. 10.1080/10888438.2019.1572758
- Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta‐analysis. Journal of Educational Psychology, 107(3), 868–883. 10.1037/edu0000006
- Arrow, A., & Tunmer, W. (2012). Contemporary reading acquisition: The conceptual basis for differentiated reading instruction. In Suggate S. & Reese E. (Eds.), Contemporary debates in childhood education and development (pp. 241–249). London, England: Routledge. 10.4324/9780203115558-38
- Astle, D. E., Bathelt, J., Holmes, J., & the CALM Team. (2019). Remapping the cognitive and neural profiles of children who struggle at school. Developmental Science, 22(1), e12747. 10.1111/desc.12747
- Auckland UniServices Limited. (2009). e‐asTTle reading. Auckland, New Zealand: Ministry of Education.
- Bacher, J., Wenzig, K., & Vogler, M. (2004). SPSS TwoStep Cluster – a first evaluation. Erlangen, Germany: University of Erlangen‐Nuremberg.
- Beck, I., McKeown, M., & Kucan, L. (2013). Bringing words to life (2nd ed.). New York, NY: Guilford Press.
- Bishop, D. V. M., & Snowling, M. J. (2004). Developmental dyslexia and specific language impairment: Same or different? Psychological Bulletin, 130(6), 858–886. 10.1037/0033-2909.130.6.858
- Bowyer‐Crane, C., Snowling, M. J., Duff, F. J., Fieldsend, E., Carroll, J. M., Miles, J., … Hulme, C. (2008). Improving early language and literacy skills: Differential effects of an oral language versus a phonology with reading intervention. Journal of Child Psychology and Psychiatry, 49(4), 422–432. 10.1111/j.1469-7610.2007.01849.x
- Bradshaw, A. R., Woodhead, Z. V. J., Thompson, P. A., & Bishop, D. V. M. (2021). Profile of language abilities in a sample of adults with developmental disorders. Dyslexia, 27(1), 3–28. 10.1002/dys.1672
- Cain, K., & Oakhill, J. (2007). Children's comprehension problems in oral and written language: A cognitive perspective. New York, NY: Guilford Press.
- Castles, A., Rastle, K., & Nation, K. (2018). Ending the reading wars: Reading acquisition from novice to expert. Psychological Science in the Public Interest, 19(1), 5–51. 10.1177/1529100618772271
- Catts, H. (2018). The simple view of reading: Advancements and false impressions. Remedial and Special Education, 39(5), 317–323. 10.1177/0741932518767563
- Catts, H., Hogan, T., & Adlof, S. (2005). Developmental changes in reading and reading disabilities. In Catts H. & Kamhi A. (Eds.), The connections between language and reading disabilities (pp. 25–40). London, England: Lawrence Erlbaum Associates.
- Catts, H., Hogan, T., & Fey, M. (2003). Subgrouping poor readers on the basis of individual differences in reading‐related abilities. Journal of Learning Disabilities, 36(2), 151–164. 10.1177/002221940303600208
- Chen, R., & Vellutino, F. (1997). Prediction of reading ability: A cross‐validation study of the simple view of reading. Journal of Literacy Research, 29(1), 1–24. 10.1080/10862969709547947
- Chiu, Y. D., & Consortium, L. R. R. (2018). The simple view of reading across development: Prediction of Grade 3 reading comprehension from prekindergarten skills. Remedial and Special Education, 39(5), 289–303. 10.1177/0741932518762055
- Chiu, T., Fang, D., Chen, J., Wang, Y., & Jeris, C. (2001). A robust and scalable clustering algorithm for mixed type attributes in large database environment. Proceedings of the seventh ACM SIGKDD international conference on knowledge discovery and data mining (pp. 263–268). 10.1145/502549
- Clarke, P. J., Snowling, M. J., Truelove, E., & Hulme, C. (2010). Ameliorating children's reading‐comprehension difficulties: A randomized controlled trial. Psychological Science, 21(8), 1106–1116. 10.1177/0956797610375449
- Darr, C., Ferral, H., Twist, J., & Watson, V. (2008). PAT (Progressive Achievement Test) – Reading comprehension: Revised 2008. Auckland, New Zealand: NZCER.
- Diamanti, V., Goulandris, N., Campbell, R., & Protopapas, A. (2018). Dyslexia profiles across orthographies differing in transparency: An evaluation of theoretical predictions contrasting English and Greek. Scientific Studies of Reading, 22(1), 55–69. 10.1080/10888438.2017.1338291
- Dockrell, J. E., & Marshall, C. R. (2015). Measurement issues: Assessing language skills in young children. Child and Adolescent Mental Health, 20(2), 116–125. 10.1111/camh.12072
- Dunn, L., Dunn, D., Sewell, J., Styles, B., Brzyska, B., Shamsan, Y., & Burge, B. (2009). The British Picture Vocabulary Scale (3rd ed.). London, England: GL Education.
- Ebert, K. D., & Scott, C. M. (2016). Bringing the simple view of reading to the clinic: Relationships between oral and written language skills in a clinical sample. Journal of Communication Disorders, 62, 147–160. 10.1016/j.jcomdis.2016.07.002
- Ehri, L. C. (2014). Orthographic mapping in the acquisition of sight word reading, spelling memory, and vocabulary learning. Scientific Studies of Reading, 18(1), 5–21. 10.1080/10888438.2013.819356
- Foorman, B. R., Koon, S., Petscher, Y., Mitchell, A., & Truckenmiller, A. (2015). Examining general and specific factors in the dimensionality of oral language and reading in fourth–10th grades. Journal of Educational Psychology, 107(3), 884–899. 10.1037/edu0000026
- Frith, U., Wimmer, H., & Landerl, K. (1998). Differences in phonological recoding in German‐ and English‐speaking children. Scientific Studies of Reading, 2(1), 31–54. 10.1207/s1532799xssr0201_2
- Georgiou, G. K., & Das, J. P. (2018). Direct and indirect effects of executive function on reading comprehension in young adults. Journal of Research in Reading, 41(2), 243–258. 10.1111/1467-9817.12091
- Georgiou, G. K., Das, J. P., & Hayward, D. (2009). Revisiting the "simple view of reading" in a group of children with poor reading comprehension. Journal of Learning Disabilities, 42(1), 76–84. 10.1177/0022219408326210
- Gilmore, A., Croft, C., & Reid, N. (1981). Burt Word Recognition Test: New Zealand revision. Wellington, New Zealand: New Zealand Council for Educational Research.
- Gough, P., & Tunmer, W. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. 10.1177/074193258600700104
- Handler, S. M., & Fierson, W. M. (2011). Learning disabilities, dyslexia and vision. Pediatrics, 127(3), e818–e856. 10.1542/peds.2010-3670
- Hoover, W., & Gough, P. (1990). The simple view of reading. Reading and Writing, 2(2), 127–160. 10.1007/BF00401799
- Kelso, K., Fletcher, J., & Lee, P. (2007). Reading comprehension in children with specific language impairment: An examination of two subgroups. International Journal of Language & Communication Disorders, 42(1), 39–57. 10.1080/13682820600693013
- Kendeou, P., van den Broek, P., Helder, A., & Karlsson, J. (2014). A cognitive view of reading comprehension: Implications for reading difficulties. Learning Disabilities Research and Practice, 29(1), 10–16. 10.1111/ldrp.12025
- Language and Reading Research Consortium. (2015). Learning to read: Should we keep things simple? Reading Research Quarterly, 50(2), 151–169. 10.1002/rrq.99
- Lauterbach, A. A., Park, Y., & Lombardino, L. J. (2017). The roles of cognitive and language abilities in predicting decoding and reading comprehension: Comparisons of dyslexia and specific language impairment. Annals of Dyslexia, 67(3), 201–218. 10.1007/s11881-016-0139-x
- Leonard, L. B. (2014). Children with specific language impairment (2nd ed.). Cambridge, MA: The MIT Press.
- Lonigan, C. J., Burgess, S. R., & Schatschneider, C. (2018). Examining the simple view of reading with elementary school children: Still simple after all these years. Remedial and Special Education, 39(5), 260–273. 10.1177/0741932518764833
- Madelaine, A., & Wheldall, K. (1998). Towards a curriculum‐based passage reading test for monitoring the performance of low‐progress readers using standardised passages: A validity study. Educational Psychology, 18(4), 471–498. 10.1080/0144341980180408
- Morris, D., Meyer, C., Trathen, W., McGee, J., Vines, N., Stewart, T., … Schlagal, R. (2017). The simple view, instructional level, and the plight of struggling fifth‐/sixth‐grade readers. Reading & Writing Quarterly, 33(3), 278–212. 10.1080/10573569.2016.1203272
- New Zealand Qualifications Authority. (n.d.). SAC information for schools. https://www.nzqa.govt.nz/providers-partners/assessment-and-moderation-of-standards/managing-national-assessment-in-schools/special-assessment-conditions/sac-information-for-schools/evidence-needed/
- Savage, R. (2001). The 'simple view' of reading: Some evidence and possible implications. Educational Psychology in Practice, 17(1), 17–33. 10.1080/02667360120039951
- Savage, R. (2006). Reading comprehension is not always the product of nonsense word decoding and linguistic comprehension: Evidence from teenagers who are extremely poor readers. Scientific Studies of Reading, 10(2), 143–164. 10.1207/s1532799xssr1002_2
- Savage, R., Burgos, G., Wood, E., & Piquette, N. (2015). The simple view of reading as a framework for national literacy initiatives: A hierarchical model of pupil‐level and classroom‐level factors. British Educational Research Journal, 41(5), 820–844. 10.1002/berj.3177
- Schrank, F., McGrew, K., & Mather, N. (2014). Woodcock Johnson IV. Rolling Meadows, IL: Riverside.
- Thapar, A., & Rutter, M. (2015). Neurodevelopmental disorders. In Rutter's child and adolescent psychiatry (pp. 31–40). Chichester, England: John Wiley & Sons. 10.1002/9781118381953.ch3
- Tunmer, W., & Chapman, J. (2007). Language‐related differences between discrepancy‐defined and non‐discrepancy‐defined poor readers: A longitudinal study of dyslexia in New Zealand. Dyslexia, 13(1), 42–66. 10.1080/10349120120115307
- Tunmer, W., & Hoover, W. (2019). The cognitive foundations of learning to read: A framework for preventing and remediating reading difficulties. Australian Journal of Learning Difficulties, 24(1), 75–93. 10.1080/19404158.2019.1614081
- Vellutino, F., Tunmer, W., Jaccard, J., & Chen, R. (2007). Components of reading ability: Multivariate evidence for a convergent skills model of reading development. Scientific Studies of Reading, 11(1), 3–32. 10.1080/10888430709336632
- Wagner, R., Torgesen, J., Rashotte, C., & Pearson, N. (2013). Comprehensive test of phonological processing: Examiner's manual (2nd ed.). Austin, TX: PRO‐ED.
- Wren, S. (2001). The cognitive foundations of learning to read. Austin, TX: Southwest Educational Development Laboratory.