Abstract
In a quasiregular orthography like English, children inevitably encounter irregular words during reading. Previous research suggests that successful reading of an irregular word depends at least partially on the child’s ability to resolve the mismatch between the decoded form and the stored pronunciation of the word, referred to as the child’s set for variability, and on the word’s relative transparency, measured here using a spelling-to-pronunciation transparency rating. Item-level analyses were used to explore the relationships among general child performance on the set for variability mispronunciation task, word-specific set for variability (predicting reading of that word), spelling-to-pronunciation transparency rating, and irregular word reading. Significant predictors included general word reading, general set for variability performance, and item-specific set for variability performance; word frequency and spelling-to-pronunciation transparency rating; and an interaction between word reading and the transparency rating. Results underscore the importance of considering both general and item-specific factors affecting irregular word reading.
The self-teaching hypothesis (Share, 1995) posits that children add words to their lexicons through item-specific learning rather than through developmental stages (see Nation & Castles, 2017). For self-teaching to be successful, a child must have phonemic awareness and letter-sound knowledge, be able to decode, and have a representation of the target word in her oral vocabulary (see Elbro, de Jong, Houter, & Nielsen, 2012). In a quasiregular orthography like English, one in which the relationship between orthography and phonology is systematic but admits many exceptions, there are inevitable encounters with words that can be only partially decoded by the application of decoding rules, resulting in a mismatch between the decoded form and the word’s pronunciation. Set for variability (Gibson & Levin, 1975; Venezky, 1999) is seen as a process that “cleans up” this mismatch between orthography-to-phonology conversion and word pronunciation (Tunmer & Chapman, 2012). For example, a young reader may decode wasp to rhyme with clasp; however, upon recognizing that /wæsp/ is not a real word, she must then flexibly apply different pronunciations for the letter a to arrive at the word, which is in her listening vocabulary. There is increasing evidence within the literature implicating the role of semantic and/or phonological cleanup in children’s reading of irregular words. For instance, Steacy et al. (2017) recently reported results supporting the role of lexical influence on irregular word reading, with vocabulary skill having a direct effect and word imageability acting as a moderator. In addition, connectionist (triangle) models of word recognition (Plaut et al., 1996) have shown that the addition of a semantic processor (represented as item-specific knowledge) to a model containing phonological and orthographic processors improves irregular word recognition.
Studies have found that set for variability predicts the reading of irregular words (Tunmer & Chapman, 2012), regular words (Elbro et al., 2012), and nonwords (Steacy et al., 2019). In these studies, set for variability was considered a general child attribute assessing cognitive flexibility for semantic cleanup: children were presented with spoken regularized pronunciations of irregular words (e.g., /wæsp/) and asked to identify the real word (i.e., /wɑsp/). Tunmer and Chapman found that set for variability, assessed as a measure of child ability, predicted English word reading and decoding in first graders both concurrently and longitudinally three years later. Elbro and colleagues reported similar findings in both shallow (Dutch) and deep (Danish) orthographies. Set for variability is related to word reading for both timed and untimed measures (Dyson, Best, Solity, & Hulme, 2017; Kearns, Rogers, Koriakin, & Al Ghanem, 2016), and these effects are stronger for word reading than for knowledge of word meaning (Dyson et al., 2017). We model our set for variability assessment on these studies, but we consider set for variability both as a general child skill and as a child-by-word1 predictor specific to each word. We speculate that set for variability is a process related to irregular word reading and that item properties make items more or less conducive to the success of that process.
There is emerging evidence that this lexical flexibility can be trained in children (Dyson, Best, Solity, & Hulme, 2017; Savage, Georgiou, Parrila, & Maiorino, 2018; Zipke, 2016). Training protocols have emphasized flexibility in applying different pronunciations for letters or letter combinations (Zipke, 2016), checking for matches and making approximations to known words (Dyson et al., 2017; Savage et al., 2018), and a two-step instructional model in which direct instruction in simple decoding was the first step and set for variability flexibility training followed as a second step (Savage et al., 2018).
In the current study we explored the relationship between set for variability (as both item-specific and general predictors) and irregular word reading by decomposing irregular word reading variance into child-, word-, and child-by-word (set for variability) components. We drew on the literature to select relevant characteristics of the child (e.g., set for variability, phonological awareness, rapid automatized naming, vocabulary), the word (e.g., frequency, number of letters, concreteness, relative transparency), and the child-by-word combination (e.g., item-level set for variability performance) as predictors. We extend the literature by asking children in grades 2-5 to read a subset of the set for variability items (i.e., the dependent measure of irregular word reading; see Appendix A) and by using the set for variability mispronunciation task as an item-level predictor of irregular word reading, while simultaneously exploring the role of other important child- and word-level predictors. We included as a word-level predictor a measure assessing each word’s relative transparency, using a spelling-to-pronunciation transparency rating2. In doing so, we were able to examine the unique roles of child-by-word set for variability (i.e., recognizing that /wæsp/ represents /wɑsp/), general child-level set for variability performance (total performance on all items on the set for variability task), and general word-level transparency in predicting set for variability item reading (correctly reading wasp as /wɑsp/). This allowed us to estimate the separate roles of general and item-specific set for variability ability at the child level and spelling-to-pronunciation transparency at the word level in explaining item-level variance in irregular word reading.
Method
Participants
Participants were 103 children in grades 2-5 from private and public schools. Prior to the study, ethical approval was obtained from the Florida State University ethics committee, which conforms to the U.S. Federal Policy for the Protection of Human Subjects. Prior to participation, teacher and parent consent was sought and student assent was obtained. Demographic data for participants are presented in Table 1. Zero-order and age-corrected correlations, along with child-level descriptive statistics, are provided in Table 2. We oversampled children who were struggling to learn to read words, as reflected in the depressed age-adjusted scaled and standard scores for phonemic awareness, rapid naming, and word reading. Students in this sample were attending either specialized schools for children with learning differences (including dyslexia) or Title I schools, schools receiving additional federal funding to support a large population of students from low socioeconomic backgrounds. All data collection took place in the United States. Although oversampled for poor reading skills, the sample had normal age-adjusted scaled scores in vocabulary. Zero-order word-level correlations and descriptive statistics representing length, frequency, and spelling-to-pronunciation transparency rating are also provided in Table 2.
Table 1.
Full Sample (N = 103)

| Variable | n | % | Mean | (SD) |
|---|---|---|---|---|
| Age (years) | | | 9.99 | (1.20) |
| Gender | | | | |
| Female | 44 | 42.72 | | |
| Male | 59 | 57.28 | | |
| Race | | | | |
| African American | 14 | 13.59 | | |
| Hispanic | 12 | 11.65 | | |
| Caucasian | 75 | 72.82 | | |
| Multiracial | 2 | 1.94 | | |
Table 2.
| Child Variables | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1. Phonemic Awareness | – | −.24 | .36 | .61 | .41 | .27 | – |
| 2. Rapid Letter Naming | −.30 | – | −.08 | −.18 | −.45 | −.57 | – |
| 3. WASI Vocabulary | .42 | −.17 | – | .33 | .17 | .04 | – |
| 4. Set for variability – Listening | .64 | −.28 | .40 | – | .63 | .40 | – |
| 5. Set for variability – Reading | .47 | −.52 | .28 | .69 | – | .76 | – |
| 6. Sight Word Efficiency | .32 | −.61 | .11 | .45 | .77 | – | – |
| 7. Age | .26 | −.32 | .30 | .37 | .43 | .26 | – |
| Raw Score Mean | 22.34 | 21.31 | 25.75 | 25.63 | 23.28 | 52.76 | – |
| Raw Score SD | 6.43 | 6.38 | 6.57 | 11.03 | 9.69 | 15.20 | – |
| Standard/Scaled Score Mean | 8.11 | 8.18 | 10.07 | – | – | 83.26 | – |
| Standard/Scaled Score SD | 2.78 | 2.40 | 3.31 | – | – | 14.07 | – |

| Word Variables | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1. Length | – | | | |
| 2. Frequency | −.22 | – | | |
| 3. SPTR | .19 | −.17 | – | |
| 4. Concreteness | .27 | −.12 | −.23 | – |
| Mean | 5.35 | 53.19 | 3.21 | 4.06 |
| SD | 1.23 | 6.52 | .83 | 1.01 |
Note. SPTR = Spelling to Pronunciation Transparency Rating; scaled scores: M = 10, SD = 3; standard scores: M = 100, SD = 15; zero-order correlations are below the diagonal, age-corrected correlations are above the diagonal.
Procedures
Testing occurred in the fall or spring of grades 2-5, with trained research assistants administering all tests. All research assistants received extensive training and practice and were required to achieve 80% procedural fidelity before testing participants. All testing sessions were audio recorded for scoring and reliability purposes. All tests were double scored and double entered by a fellow research assistant, and discrepancies were resolved by the project coordinator.
Child measures
Phonemic awareness (PA).
The phonemic awareness task was the Elision task from the Comprehensive Test of Phonological Processing (CTOPP; Wagner, Torgesen, Rashotte, & Pearson, 2013). Students were asked to delete phonological units from words. The authors report test-retest reliability of .93.
Rapid automatized naming (RAN).
To test for rapid automatized naming, we used the letter naming task from the CTOPP (Wagner, Torgesen, Rashotte, & Pearson, 2013). Students were asked to name a series of letters as fast as they could without making mistakes. The authors report a test-retest reliability of .72 for children ages 8-17 years.
Set for variability - Listening mispronunciation task.
Based on the work of Tunmer & Chapman (1998; 2012), set for variability was assessed by participants’ ability to determine the correct pronunciation from spoken words that were “mispronounced” based on common decoding rules, as they might be if they were regularized or partially decoded (e.g., /brikfəst/ for /brɛkfəst/). The coefficient alpha for our sample was .91.
Vocabulary.
The vocabulary subtest from the WASI (Wechsler, 2011) was used to measure expressive vocabulary. The test required students to identify pictures and define words. Interrater reliability ranges from .92 to .94 (McCrimmon & Smith, 2013).
Sight word reading efficiency (SWE).
The word reading task in this study was the SWE task from the Test of Word Reading Efficiency (Torgesen, Wagner, & Rashotte, 2012). Students were asked to read a list of words in order of difficulty for 45 seconds. The authors report an alternate forms reliability of .91.
Set for variability – Reading (dependent measure).
Students were asked to read 40 of the words from the set for variability mispronunciation task (see Appendix A). This was an untimed, experimenter created task that served as the item-level dependent measure. The coefficient alpha for this task in our sample was .94.
Word measures
Word length.
The number of letters in each word.
Frequency.
We used the standard frequency index (SFI) from the Educator’s Word Frequency Guide (Zeno, Ivens, Millard, & Duvvuri, 1995). SFI represents a logarithmic transformation of the frequency of a word type per million tokens within a corpus of over 60,000 samples of text. The range of SFI within the corpus is 3.5 to 88.3.
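For reference, the SFI is commonly defined (our gloss; the formula is not restated in this article) as a rescaled base-10 logarithm of U, the dispersion-adjusted frequency per million tokens:

$$\mathrm{SFI} = 10\,(\log_{10} U + 4),$$

so that a word occurring once per million tokens has an SFI of 40, and each 10-point increase corresponds to a tenfold increase in frequency.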
Spelling-to-pronunciation transparency rating.
To index how difficult it is to arrive at the correct pronunciation of each irregular word by applying typical decoding rules, we asked expert raters to rate this difficulty on a six-point scale. This measure has been used in one previous study (Steacy et al., 2017). Our expert raters (N=25) were professors and graduate students with a firm background in phonics and decoding. Experts were given the following prompt: “… pretend that the letter string is unfamiliar to you and apply a traditional grapheme-phoneme decoding strategy to the letter string and rate the ease of matching your recoded form of the letter string to the actual word pronunciation.” Cronbach’s alpha was .82.
Concreteness.
Concreteness was coded using ratings from Brysbaert, Warriner, and Kuperman (2014) for 40,000 generally known English words. People were asked to rate the concreteness of words on a scale of 1 (abstract) to 5 (concrete).
Data analysis
Item-response-based crossed random effects models were used to address the role of child-, word-, and child-by-word-level predictors of irregular word reading variance. These models allow us to include child-, word-, and item-specific child-by-word predictors in the same model, as well as to address interactions between child and word predictors. These cross-classification multilevel models were used to predict children’s reading of a specific word (e.g., wasp), coded as a dichotomous response (correct or incorrect), using child (set for variability, phonological awareness, rapid automatized naming, vocabulary), word (frequency, number of letters, concreteness, relative transparency), and child-by-word (i.e., item-specific performance on the set for variability mispronunciation task: correctly identifying wasp from /wæsp/) predictors. We included SWE as a predictor in the models to control for general word reading skill. Given that our sample came in part from schools for students with dyslexia, we felt it important to control for word reading skill rather than age or grade. To ensure that this decision did not impact our results or interpretation, we ran models with and without SWE. Without SWE in the model, only RAN became significant (γ = −.09, z = .02, p < .001), and other results were the same. In addition, we ran the models with age included and without SWE; age was not a significant predictor (γ = .17, z = 1.53, p = .12). We conducted these analyses using the Laplace approximation available through the lmer function (Bates & Maechler, 2009) from the lme4 library in R (R Development Team, 2012). Random intercepts were included for child and word. Fixed effects were included for all child, word, and child-by-word predictors. We estimated the variability explained by calculating the reduction in child and word variance from the base model using the formula $(r_{010(\text{Base model})} - r_{010(\text{Model } n)})/r_{010(\text{Base model})}$, where n represents the model to which the base model was compared (Bryk & Raudenbush, 1992).
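To make the model specification concrete, the sketch below shows how such crossed random effects models could be fit in R. It is illustrative only, not the authors’ analysis script: the data frame and variable names (item_data, read_correct, sfv_item, sfv_total, nletters, etc.) are hypothetical, and current versions of lme4 fit binomial models through glmer rather than lmer.

```r
# Illustrative sketch under hypothetical variable names.
# Each row of item_data is one child-by-word observation.
library(lme4)

# Unconditional model: crossed random intercepts for child and word
unconditional <- glmer(read_correct ~ 1 + (1 | child_id) + (1 | word_id),
                       data = item_data, family = binomial)

# Main effects model: child-by-word, child, and word predictors
main_effects <- glmer(
  read_correct ~ sfv_item +                       # child-by-word: SfV success on this item
    pa + ran + vocab + swe + sfv_total +          # child-level covariates
    sptr + nletters + freq + concrete +           # word-level covariates
    (1 | child_id) + (1 | word_id),
  data = item_data, family = binomial)            # Laplace approximation by default

# Exploratory interaction model: word reading skill x transparency rating
interaction_model <- update(main_effects, . ~ . + swe:sptr)

# Proportion of child and word intercept variance explained,
# relative to the unconditional model (the variance-reduction formula above)
v0 <- as.data.frame(VarCorr(unconditional))$vcov
v1 <- as.data.frame(VarCorr(main_effects))$vcov
(v0 - v1) / v0
```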
Results
To decompose irregular word reading variance, we ran a series of crossed random effects models. These models are presented in Table 3. First, we ran an unconditional model to determine how much variance was associated with the word and child levels. These variance estimates were used to determine how much variance was explained by subsequent models. The intercept of the unconditional model indicated that the average probability of a correct response across words and children on the reading task was .52. This figure was obtained by converting the logit intercept of .61 to a probability using Equation 1:
$$p_{ji} = \frac{\exp(\gamma_{000})}{1 + \exp(\gamma_{000})} \qquad (1)$$

where $p_{ji}$ represents the probability of a correct response from person $j$ on item $i$ and $\gamma_{000}$ is the intercept.
Table 3.
| Fixed Effects Parameter | Unconditional Est. | (SE) | z | p | Main Effects Est. | (SE) | z | p | Interaction Est. | (SE) | z | p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intercept (γ000) | .610 | (.349) | 1.751 | .080 | .302 | (.185) | 1.635 | .121 | .252 | (.197) | 1.280 | .200 |
| Item covariate | | | | | | | | | | | | |
| γ001 Set for variability | — | — | — | — | 1.035 | (.118) | 8.766 | <.001 | 1.031 | (.119) | 8.688 | <.001 |
| Child covariates | | | | | | | | | | | | |
| γ002 Phonological Awareness | — | — | — | — | −.002 | (.020) | .077 | .939 | −.002 | (.021) | .098 | .922 |
| γ003 Rapid Letter Naming | — | — | — | — | −.017 | (.019) | .863 | .388 | −.018 | (.020) | .896 | .370 |
| γ004 WASI Vocabulary | — | — | — | — | .009 | (.017) | .557 | .577 | .010 | (.017) | .574 | .566 |
| γ005 Sight Word Efficiency | — | — | — | — | .068 | (.009) | 7.516 | <.001 | .069 | (.009) | 7.514 | <.001 |
| γ006 Set for variability (Total) | — | — | — | — | .065 | (.013) | 5.035 | <.001 | .065 | (.013) | 4.920 | <.001 |
| Word covariates | | | | | | | | | | | | |
| γ007 SPTR | — | — | — | — | −.887 | (.227) | 3.907 | <.001 | −.873 | (.229) | 3.815 | <.001 |
| γ008 Length | — | — | — | — | .315 | (.154) | .875 | .382 | .149 | (.155) | .958 | .338 |
| γ009 Frequency | — | — | — | — | .161 | (.028) | 5.743 | <.001 | .161 | (.028) | 5.697 | <.001 |
| γ010 Concreteness | — | — | — | — | .362 | (.189) | 1.916 | .055 | .365 | (.191) | 1.912 | .056 |
| Interactions | | | | | | | | | | | | |
| γ011 SWE × SPTR | — | — | — | — | — | — | — | — | .016 | (.004) | 3.520 | <.001 |

| Random Intercepts | Variance | Variance | Variance Explained |
|---|---|---|---|
| Word | 3.389 | 1.091 | 67.81% |
| Child | 3.579 | .746 | 79.16% |
Note. SWE= Sight Word Efficiency; SPTR=Spelling to Pronunciation Transparency Rating
In the main effects model, the intercept indicates the predicted probability of reading the target word correctly (e.g., wasp) for a child who did not successfully perform the set for variability task on that item (hearing /wæsp/ and recognizing it as wasp), who had average scores on all child variables, and who was reading a word that was average across the word characteristics. This probability was .57. To understand whether a student’s ability to do the set for variability task on a particular word predicts their ability to read that particular word, we included an item-specific predictor for set for variability (γ001). This was a significant predictor (γ001 = 1.035, z = 8.766, p < .001). A student who correctly completed the set for variability item had a probability of .79 of reading the word correctly (controlling for all other word and child factors). Thus, students had a .22 higher probability of reading the word correctly than if they were unsuccessful at the set for variability task on that word.
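These two probabilities follow directly from applying Equation 1 to the fixed effects in Table 3:

$$\frac{\exp(.302)}{1 + \exp(.302)} \approx .57, \qquad \frac{\exp(.302 + 1.035)}{1 + \exp(.302 + 1.035)} \approx .79.$$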
We were then interested in understanding the child and word characteristics associated with successfully reading the words on the set for variability reading list. We found significant main effects for child SWE (γ005 = .068, z = 7.516) and set for variability total score (γ006 = .065, z = 5.035). These effects indicate that a student 1 SD above the mean on SWE had a probability of .79 of reading the word correctly, and a student with a set for variability total score 1 SD above the mean had a probability of .73 of reading the word correctly. The main effects model accounted for 79.16% of the child variance in irregular word reading.
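These values can likewise be recovered from Table 3, assuming (our reading, not stated explicitly) that the child predictors were entered mean-centered in raw-score units, so that 1 SD corresponds to the raw-score SDs in Table 2 (15.20 for SWE, 11.03 for the set for variability total):

$$\frac{\exp(.302 + .068 \times 15.20)}{1 + \exp(.302 + .068 \times 15.20)} \approx .79, \qquad \frac{\exp(.302 + .065 \times 11.03)}{1 + \exp(.302 + .065 \times 11.03)} \approx .73.$$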
At the word level, we found significant main effects for frequency (γ009 = .161, z = 5.743) and spelling-to-pronunciation transparency rating (γ007 = −.887, z = 3.907). These main effects indicate that higher frequency words had a higher probability of being read correctly than low frequency words. Furthermore, words rated as more difficult to go from the decoded form to the correct pronunciation had a lower probability of being read correctly than words rated as easier. If raters rated a word 1 SD above the mean in difficulty, students had a probability of .37 of reading the word correctly, whereas if a word was rated 1 SD below the mean in difficulty, students had a probability of .76 of reading the word correctly. The negative coefficient indicates that as rated difficulty increased, accuracy decreased. The main effects model accounted for 64.56% of the word variance.
To examine the relationship between child and word variables, we ran an exploratory interaction model to investigate the relationship between overall child word reading skill (SWE) and spelling-to-pronunciation transparency rating in cases of accurate and inaccurate item-level set for variability performance. We found a significant interaction between SWE and spelling-to-pronunciation transparency rating. The transparency rating had a greater impact on performance for readers at the lower end of the SWE distribution (1 SD below the mean) than for readers at the upper end of the distribution (1 SD above the mean). The relationship held whether the set for variability item response was accurate or inaccurate, with the main differences being the mean probability of a correct response on irregular word reading (lower in the case of an inaccurate set for variability response) and the slope of the relationship between spelling-to-pronunciation transparency rating and response probability (steeper in the case of an inaccurate set for variability response). It is possible that this interaction reflects the fact that poor readers are still grappling with the decoded forms of the words. This difficulty could be due to producing less accurate decoded forms of the word, more difficulty in matching the decoded form to the word pronunciation, less general knowledge of word pronunciations (i.e., poor semantic knowledge), or a combination of the three. More skilled readers, on the other hand, have better decoding skills, less difficulty in matching the decoded form to the word pronunciation, and perhaps more well-formed representations of the words in their lexicons.
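The sketch below illustrates how this kind of interaction can be probed, continuing the hypothetical variable names from the earlier model sketch; the probe values (±1 SD of SWE in raw-score units, a correct set for variability response, other covariates at their means) are our illustrative choices, not reported results.

```r
# Sketch: model-implied probabilities across the SPTR range at +/- 1 SD of SWE,
# holding other covariates at their means and conditioning on a correct
# set for variability item response. Variable names are hypothetical.
probe <- expand.grid(
  swe  = mean(item_data$swe) + c(-1, 1) * 15.20,   # raw-score SWE SD from Table 2
  sptr = seq(min(item_data$sptr), max(item_data$sptr), length.out = 25)
)
probe$sfv_item <- 1                                # correct SfV response on the item
for (v in c("pa", "ran", "vocab", "sfv_total", "nletters", "freq", "concrete")) {
  probe[[v]] <- mean(item_data[[v]])
}
probe$prob <- predict(interaction_model, newdata = probe,
                      re.form = NA, type = "response")   # fixed effects only
```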
Discussion
The purpose of this study was to explore the role of set for variability in irregular word reading. There has been increasing interest in set for variability as a proxy for students’ flexibility with the quasiregular English writing system. In this study, we captured this flexibility from the perspective of both the word and the child. We included a child-by-word predictor of item-specific set for variability performance. At the child level, we included set for variability as a general child skill. At the word level, we created a proxy for set for variability with a spelling-to-pronunciation transparency rating. We included all three predictors to create a multifaceted model that captures this flexibility from the perspective of the specific item, general child skill, and word characteristics.
The results from our study provide further support for set for variability as a predictor of irregular word reading. Our models suggest that both item-specific set for variability (the ability to do the set for variability task on a specific word) and general child-level performance on the task are strong predictors of item-specific word reading (the ability to read the same word). Our results also suggest that set for variability is a strong predictor over and above both PA and general word reading, suggesting that good PA may be necessary but not sufficient for reading irregular words. More specifically, having good phonological skills can support general decoding but may not lead to the right phonemic realization for accurate word reading, particularly for irregular words. This is supported by the finding that phonological awareness was not a significant predictor when set for variability was included in the model at both the item and child levels. Similarly, given that set for variability remained a significant predictor even when general word reading skill was included in the model, having strong word reading skills may not be enough to successfully read these irregular words. In addition, the word-level counterpart of the set for variability task, measured via the spelling-to-pronunciation transparency rating, was a significant predictor of item-level irregular word reading variance, illustrating the importance of specific item, child, and word characteristics.
We speculate that the ability to complete the set for variability mispronunciation task reflects both an item-specific skill for item-based word reading and an important metalinguistic skill related to students’ ability to arrive at the correct pronunciation of a word. There appear to be individual differences in this metalinguistic skill and/or in students’ willingness to engage in this process. Students’ successful reading of irregular words depended on their general word reading skill, their set for variability skill, and how difficult it was to go from the regularized decoding of the word to the actual pronunciation. These results indicate that accurate reading depends on both child skill and the demands presented by the specific word. We did not find a significant effect of general vocabulary knowledge, possibly because vocabulary was not tested for the specific words read in this study.
As we move forward in understanding the relationship between the set for variability mispronunciation task and reading skill, some questions remain unanswered. First, a measure of item-specific vocabulary knowledge and/or word familiarity ratings would be warranted. Second, an exploration of the impact of word consistency on word reading in relation to set for variability appears important. Finally, further exploration of how specific underlying factors of the set for variability task relate to irregular word reading will help us develop a deeper understanding of the skill and how it can inform instructional practices.
Acknowledgement
This research was supported in part by Grant P20HD091013 from NICHD. The content is solely the responsibility of the authors and does not necessarily represent the official view of NICHD.
Appendix A
Table 1A.
| Target Words | | | |
|---|---|---|---|
| treasure | spinach | deaf | kind |
| island | piano | prove | lizard |
| veins | mystery | measles | ache |
| deny | pudding | stomach | chorus |
| body | pigeon | rhythm | money |
| break | onion | whom | weather |
| ninth | chemist | iron | camel |
| scent | metal | blind | lamb |
| soup | devil | rely | tongue |
| scissors | river | post | wasp |
Footnotes
We use the term child-by-word to refer to a predictor that is specific to the word in the dependent measure (i.e., recognizing that /wæsp/ represents /wɑsp/ on the set for variability task predicting reading wasp as /wɑsp/). This is often referred to in the literature as an “item-specific” predictor.
We consider this measure to be the word-level counterpart to the child level set for variability measure.
The authors declare no conflict of interest.
Contributor Information
Laura M. Steacy, Florida Center for Reading Research, Florida State University
Lesly Wade-Woolley, University of South Carolina.
Jay G. Rueckl, University of Connecticut & Haskins Laboratories
Ken Pugh, Haskins Laboratories.
James D. Elliott, Florida Center for Reading Research, Florida State University
Donald L. Compton, Florida Center for Reading Research, Florida State University
References
- Brysbaert M, Warriner AB, & Kuperman V (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46, 904–911.
- Dyson H, Best W, Solity J, & Hulme C (2017). Training mispronunciation correction and word meanings improves children’s ability to learn to read words. Scientific Studies of Reading, 1–16.
- Elbro C, de Jong PF, Houter D, & Nielsen A (2012). From spelling pronunciation to lexical access: A second step in word decoding? Scientific Studies of Reading, 16(4), 341–359.
- Gibson EJ, & Levin H (1975). The psychology of reading. Cambridge, MA: The MIT Press.
- Kearns DM, Rogers HJ, Al Ghanem R, & Koriakin T (2016). Semantic and phonological ability to adjust recoding: A unique correlate of word reading skill? Scientific Studies of Reading.
- McCrimmon AW, & Smith AD (2013). Review of Wechsler Abbreviated Scale of Intelligence, second edition (WASI-II). Journal of Psychoeducational Assessment, 31(3), 337–341.
- Nation K, & Castles A (2017). Putting the learning into orthographic learning. Theories of reading development, 148–168.
- Plaut DC, McClelland JL, Seidenberg MS, & Patterson K (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103(1), 56–115.
- Savage R, Georgiou G, Parrila R, & Maiorino K (2018). Preventative reading interventions teaching direct mapping of graphemes in texts and set-for-variability aid at-risk learners. Scientific Studies of Reading, 22(3), 225–247.
- Share DL (1995). Phonological recoding and self-teaching: Sine qua non of reading acquisition. Cognition, 55, 151–218.
- Steacy LM, Compton DL, Petscher Y, Elliott JD, Smith K, Rueckl JG, … Pugh KR (2019). Development and prediction of context-dependent vowel pronunciation in elementary readers. Scientific Studies of Reading, 23(1), 49–63.
- Steacy LM, Kearns DM, Gilbert JK, Compton DL, Cho E, Lindstrom ER, & Collins AA (2017). Exploring individual differences in irregular word recognition among children with early-emerging and late-emerging word reading difficulty. Journal of Educational Psychology, 109, 51–69.
- Torgesen JK, Wagner R, & Rashotte C (2012). Test of Word Reading Efficiency 2. Austin, TX: Pro-Ed.
- Tunmer WE, & Chapman JW (1998). Language prediction skill, phonological recoding ability and beginning reading. In Hulme C & Joshi RM (Eds.), Reading and spelling: Development and disorder (pp. 33–67). Hillsdale, NJ: Erlbaum.
- Tunmer WE, & Chapman JW (2012). Does set for variability mediate the influence of vocabulary knowledge on the development of word recognition skills? Scientific Studies of Reading, 16(2), 122–140.
- Venezky RL (1999). The American way of spelling: The structure and origins of American English orthography. New York, NY: Guilford Press.
- Wagner RK, Torgesen JK, Rashotte C, & Pearson NA (2013). Comprehensive Test of Phonological Processing 2. Austin, TX: Pro-Ed.
- Wechsler D (2011). Wechsler Abbreviated Scale of Intelligence–Second Edition (WASI-II). San Antonio, TX: NCS Pearson.
- Zeno SM, Ivens SH, Millard RT, & Duvvuri R (1995). The educator’s word frequency guide [CD-ROM]. New York: Touchstone Applied Science Associates.
- Zipke M (2016). The importance of flexibility of pronunciation in learning to decode: A training study in set for variability. First Language, 36(1), 71–86.