It is important to remember that what the members of the National Early Literacy Panel (NELP) and these commentators have in common is the goal of improving our understanding of children’s early development related to literacy and of advancing the field of early childhood education so that children gain the maximum benefit from their education. We were struck by the either-or approach of many of these commentaries: the Report does not present a choice between code-focused and meaning-focused instruction, and we do not view early childhood education in this either-or fashion. Nevertheless, we applaud the time and effort these authors have expended in stating their views and raising questions for the field, and we are heartened that many of the issues they raise are the same as those explicitly stated in the Report. We have divided our comments into two responses: this one, which addresses the critics’ conceptual concerns, and one on the methodological and statistical issues.
We said too much. We haven’t said enough
Several critiques took exception either to things they believe we suggested (and should not have) or to things they think we should have suggested (but did not). Perhaps these contradictory concerns indicate that we struck an appropriate balance. Teale et al. and Dickinson et al. are concerned that the Report will narrow the instruction provided by early childhood educators. The Report is a synthesis of research, not a practitioner’s guide on how to teach early literacy, though we believe it has value for informing practice.
We were disappointed that Pearson and Hiebert’s comments failed to reflect the distinctions among early childhood education environments (e.g., kindergarten, family child care, center-based child care). This failure to acknowledge these variations perhaps prevents them from recognizing how the Report advances the field. Unlike the National Reading Panel (NRP) report, our focus was on children of kindergarten age or younger. Although the two reports relied on some of the same studies of code-based teaching (about 75% of the studies were unique to either NELP or NRP), our focus was on understanding the impacts of interventions specifically with younger children.
Pearson and Hiebert were concerned that we did not examine the results of large-scale federal studies, such as the Reading First or Early Reading First evaluations. We did not do so because these reports were not peer reviewed and thus fell outside our review parameters, were not yet available when we wrote the Report, did not include preschool or kindergarten outcomes (Reading First), or evaluated the impact of a funding stream rather than of a specific intervention. Pearson and Hiebert are correct that there are ways such studies could have been included in a meta-analysis. However, the ECLS-K study (Denton & West, 2002) neither evaluated the effectiveness of any intervention nor included correlations of early skills with later achievement (its multivariate results were consistent with the NELP findings).
One consideration in determining the scope of the Report was to avoid the controversies surrounding the NRP conclusions. Our aim was solely to present a rigorous synthesis of research findings. The Report was intended as a first step in a process of developing practice recommendations; it was never intended to be such a practice document itself. Some critics decried the interpretations of NELP findings in publications like Early Beginnings, but the panelists neither wrote nor approved that publication, so we have no response to offer on it. However, such publications highlight legitimate differences in the interpretation of research results, and rather than complaining that we failed to impose our interpretations, we are amazed that these commentaries failed to articulate their own interpretations (an exception is the Schickedanz and McGee entry, which focuses on how the NELP findings can be used to better help children). Although we agree that the NELP Report could be misinterpreted, that is not a flaw of the Report. We hope that attempts to use the Report to advance practice will attend to the entirety of the Report and will neither base recommendations on selected findings nor overextend the evidence.
Language development and the development of skilled reading
Several critics (Dickinson et al.; Neuman; Teale et al.) express concern that the Report did not sufficiently highlight the role of oral language in later literacy. We believe that we reviewed the language evidence thoroughly, but the available evidence constrained us from making the claims these critics would have preferred. The panel did not neglect language: we included more than 90 correlational studies on its relationship with literacy (including data from nearly 10,000 children). In each instructional chapter, language outcomes were reported and discussed, and an entire chapter was devoted to studies of interventions aimed at teaching language. We further analyzed these studies, finding that some oral language measures are more closely related to reading than others and that language measures differed in their relationships with comprehension and decoding. These findings are consistent with longitudinal studies (e.g., Sénéchal & LeFevre, 2002; Storch & Whitehurst, 2002) and theoretical frameworks (e.g., Whitehurst & Lonigan, 1998). These differences were not small: some measures of oral language consistently explained less than 10% of the variance in later reading, whereas other measures explained about 50% of this variance.
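To make the metric concrete (the correlation values used here are illustrative round numbers, not figures drawn from the Report), the proportion of variance in a later reading outcome explained by a single predictor is simply the square of its correlation with that outcome:

$$ r = .30 \;\Rightarrow\; r^{2} = .09 \;(\text{about } 10\% \text{ of the variance}); \qquad r = .70 \;\Rightarrow\; r^{2} = .49 \;(\text{about } 50\%). $$

Thus, the contrast described above corresponds roughly to the difference between predictors correlating in the low .30s with later reading and predictors correlating around .70.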
Our conclusions concerning the importance of language were:
Such results are potentially instructive about the focus of early childhood education. They suggest that a focus on building vocabulary alone is unlikely to be sufficient for improving outcomes not only in literacy but also in oral language itself. Although, these results should not be taken to imply that well-developed vocabularies are unimportant for literacy. The results suggest that well-developed vocabularies are insufficient for literacy. More complex oral-language skills are dependent on vocabulary. For instance, a child with strong grammatical knowledge but limited vocabulary would have a difficult time understanding a text or writing a meaningful narrative. Vocabulary provides the foundation for grammatical knowledge, definitional vocabulary, and listening comprehension. (Lonigan, Schatschneider, & Westberg, 2008, p. 75; emphasis in original).
And
The results suggest a need for more careful study of the role of oral language in literacy development…. These results suggest that an instructional focus on vocabulary during the preschool and kindergarten years is likely a necessary but insufficient approach to promoting later literacy success. (Lonigan, Schatschneider, & Westberg, 2008, p. 78).
Far from saying that oral language skills are unimportant to the development of literacy, our call was to move beyond the narrow focus on vocabulary or on creating “language rich environments” often found in discussions in early childhood education to a broader and more detailed account of what aspects of oral language require attention and of how these skills can be promoted. Indeed, the modest correlations between the global oral language category and later decoding, reading comprehension, or spelling suggest that a general focus on something labeled “oral language” is unlikely to provide much literacy benefit to young children.
In many ways, these critiques appear to be echoing what we highlighted in the Report: that oral language skills are substantially more important for reading comprehension than for decoding, that a broader array of oral language skills beyond vocabulary appears to be required for reading comprehension, that more research on these dimensions of oral language is needed, and that there is substantially more evidence of positive impacts of instruction for increasing young children’s simple vocabulary than there is for promoting oral language skills beyond simple vocabulary (e.g., grammar, deep vocabulary knowledge, listening comprehension). Many of the critiques were written as if we had argued that oral language skills were unimportant for literacy or as if we had failed to review such research. We clearly did neither. Regardless, we welcome the concurrence on these significant points, and we hope that this level of concern and agreement translates into efforts designed to fill these gaps in the research evidence.
Nevertheless, the authors of these critiques have gone well beyond what current evidence supports. Dickinson et al. seem to suggest that we excluded evidence that would have demonstrated that oral language skills are substantially more important than code-related skills because the importance of oral language skills increases over time whereas the importance of code-related skills diminishes. We are unaware of such evidence, and they do not provide any support for their claims. In considering these claims, we examined studies from the Report’s meta-analysis of predictive relations for early literacy skills. We identified studies that included long-term reading outcomes and both language and code-related predictors, or that assessed reading outcomes at multiple grades and included language predictors, code-related predictors, or both.
A summary of the results of these studies (i.e., zero-order or partial correlations with reading comprehension or reading composite variables) is shown in Table 1. MacDonald and Cornwall (1995) administered reading and spelling measures to twenty-four 17-year-olds who had completed reading, vocabulary (PPVT-R), and phonological awareness measures when they were in kindergarten. Partial correlations (controlling for SES and the alternative predictor variable) with decoding and reading comprehension show a stronger role for kindergarten phonological awareness than for kindergarten vocabulary in predicting reading outcomes in high school. Badian (2001) administered a verbal IQ measure (WPPSI) and an orthographic matching measure to 96 children 6 months before kindergarten entry and two phonological awareness measures when the children were in kindergarten. These same children were administered reading measures in first, third, and seventh grades. Zero-order correlations of the preschool and kindergarten measures with the reading comprehension measures at grades 1, 3, and 7 demonstrate significant predictive relations for orthographic knowledge in preschool and phonological awareness in kindergarten with reading at each grade. Moreover, for all code-related predictors, correlations with reading comprehension were statistically significant when verbal IQ was controlled.
Table 1. Zero-order or partial correlations of preschool and kindergarten predictors with later reading outcomes.

| Study | Predictor Variable | K | 1 | 2 | 3 | 6 | 7 | 10 |
|---|---|---|---|---|---|---|---|---|
| MacDonald & Cornwall (1995) | PA | | | | | | | .49/.20ᵃ |
| | PPVT-R | | | | | | | .21/.12ᵃ |
| Badian (2001) | Verbal IQ | | .50 | | .50 | | .60 | |
| | PA (Syllable) | | .47 | | .25 | | .46 | |
| | PA (Rhyme) | | .47 | | .49 | | .51 | |
| | Orthography | | .40 | | .47 | | .44 | |
| Butler et al. (1985) | Psycholinguistic | | .44 | .40 | .49 | .47 | | |
| | Language | | .40 | .47 | .49 | .48 | | |
| Walker et al. (1994) | MLU | .55 | .36 | .58 | .43 | | | |
| | # of Words | .62 | .32 | .63 | .43 | | | |
| Sénéchal & LeFevre (2002) | PPVT-R | | .14 | | .53 | | | |
| | Listening Comp. | | .16 | | .38 | | | |
| | PA | | .50 | | .73 | | | |
| | Alphabet Knowledge | | .44 | | .39 | | | |

Notes. Columns K through 10 indicate the grade at which the reading outcome was assessed. PA = phonological awareness; PPVT-R = Peabody Picture Vocabulary Test--Revised; MLU = mean length of utterance.
ᵃ Partial correlations for decoding/reading comprehension, controlling for SES and the alternative predictor variable (i.e., for PA, PPVT-R scores were controlled).
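For readers less familiar with the statistic reported for MacDonald and Cornwall (1995), a partial correlation expresses the relation between a predictor and an outcome after removing the variance each shares with a control variable. The first-order formula below is a general statistical identity, not a computation specific to the NELP analyses; partials controlling for two variables (e.g., SES and the alternative predictor) are obtained by applying the same formula recursively:

$$ r_{xy\cdot z} \;=\; \frac{r_{xy} - r_{xz}\,r_{yz}}{\sqrt{\left(1 - r_{xz}^{2}\right)\left(1 - r_{yz}^{2}\right)}}, $$

where, for example, x is kindergarten phonological awareness, y is high school reading comprehension, and z is the control variable.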
Butler, Marsh, Sheppard, and Sheppard (1985) examined the predictive relations of two kindergarten factors with a composite reading measure that included both decoding and reading comprehension tasks: a psycholinguistic factor (comprising scores from an auditory closure test and a sound-blending test) and a language factor. The measures that made up the psycholinguistic and language factors were administered to 392 kindergarteners, and reading measures were administered to these same children in grades 1, 2, 3, and 6. Zero-order correlations of the two factors with the composite reading measure show roughly equal and stable predictive relations for both language and code-related skills across time. Moreover, in multiple regressions, both the psycholinguistic and the language factors were significant predictors of reading at each grade level. Sénéchal and LeFevre (2002) reported correlations between vocabulary, listening comprehension, phonological awareness, and alphabet knowledge measured in kindergarten and reading outcomes in first and third grades. Although the results of Sénéchal and LeFevre’s study suggested a larger role for language skills at the later reading assessment, code-related skills continued to be as strongly or more strongly related to reading from the first-grade to the third-grade assessment. Data from the Walker, Greenwood, Hart, and Carta (1994) study, which was cited by Dickinson et al. in support of their position, do not suggest an increasing importance of oral language or stronger relations with reading than those of the code-related skills included in the other studies.
Although this is not an exhaustive summary of such results, the studies summarized in Table 1 are consistent with the findings of all the studies on this topic summarized by NELP, and they provide no evidence that oral language plays an increasing role across time, that the predictive relations between oral language and reading are more durable than those between code-related skills and reading, that the role of code-related skills diminishes, or that oral language skills are more important for reading comprehension than are code-related skills. As demonstrated by Storch and Whitehurst (2002), children’s oral language skills have increasing importance as measures of reading move from those that primarily measure decoding to those that measure reading comprehension; however, code-related skills remain important throughout. Consequently, there is no support for the claims of Teale et al. that (a) we did not address how what is taught in preschool and kindergarten relates to literacy in third grade and beyond, (b) the predictor skills we identified are unlikely to be related to more mature reading comprehension (they are), or (c) the most significant factor in upper-grade reading comprehension is background knowledge (this may be true; however, no studies support the claim).
Although one could work backwards and create a hypothetical chain of causal links from highly proficient reading comprehension skills in mature readers through well-developed background knowledge to strong language skills to something taught in preschool (e.g., see Neuman), such a chain of causal links would merely represent a hypothesis. This hypothesis would need to be tested and supported by data. At present, there are few (if any) studies that would allow tests of these hypotheses. We do wonder, however, what the relation is between solving science problems and reading comprehension, given that a meta-analysis of science problem solving is the evidence cited by Neuman as supporting her hypothesis. Curiously, the results of that meta-analysis indicate that interventions that emphasized declarative knowledge had little to no impact on students’ abilities to solve problems, which “… varied from algorithmic trigonometry problems to complex but closed physics and chemistry problems” (Taconis, Ferguson-Hessler, & Broekkamp, 2001, p. 461). Although that study found that interventions emphasizing “strategic knowledge” negatively impacted problem solving, this seems rather different from providing the skills necessary to decode words. The importance of code-related skills to reading does not imply that language skills should be ignored; similarly, the importance of language skills to reading does not imply that code-related skills should be ignored.
Impacts of shared-reading activities
Schickedanz and McGee provide an alternative summary of the shared-reading interventions included in the Report. Their conclusions are largely consistent with those of the Panel; however, they suggest a greater impact of shared-reading interventions for younger children than for older children and for expressive language measures than for receptive measures. These are reasonable interpretations of the studies, and during its review, the Panel also considered alternative groupings of studies and effects. As the Report noted, however, it is impossible to attribute anything unambiguously to these differences, given that this variable (child age) was confounded with others, such as risk status.
Regarding a stronger impact of shared reading on expressive measures than on receptive measures, we have noted this pattern in our own shared-reading studies (e.g., Arnold, Lonigan, Whitehurst, & Epstein, 1994; Lonigan, Anthony, Bloomfield, Dyer, & Samwel, 1999; Lonigan & Whitehurst, 1998)--although it is not a consistent finding. However, we are at a loss to provide a theoretical rationale for why such a finding should be expected. Similarly, we did not include some outcomes in our analyses because the measure was often confounded with experimental condition. Schickedanz and McGee also question the non-inclusion of some studies. We could not include studies that had serious design or analytic problems precluding calculation of appropriate effect sizes or interpretation of the study results, and we could not include studies that fell outside our inclusion criteria (such as dissertations or foreign-language publications)--the studies the various critics question were not included for these reasons. We agree completely with Schickedanz and McGee that if shared reading is to be used as an instructional vehicle for improving children’s language, there needs to be more high-quality research using a broad range of outcome measures beyond vocabulary.
Effects of parent and home programs
Dail and Payne lament that our summary of parent and home intervention programs included studies of such a wide array of approaches. Given the limited research evidence concerning such programs, we thought it prudent to be as inclusive as possible rather than adopting a narrow ideological stance that would have precluded consideration of relevant data. We were disappointed both by how few well-conducted studies allowing estimation of causal impacts had been published in peer-reviewed outlets and by the fact that the retrieved studies had little in common that would allow much meaningful analysis of the effects of various factors. Had we included non-peer-reviewed studies, we would not have substantially increased the yield, but we would have included the several randomized evaluations of Even Start that show few beneficial effects of these programs.
Dail and Payne assert causal claims without appropriate evidence and even seem a bit chagrined at the Panel’s unwillingness to endorse this approach. They claim their family literacy program “works” and explain why it must work, but they fail to show evidence of success--not a surprising failure, given that their research methods cannot demonstrate a learning impact (without control groups, it is impossible to determine an advantage for a program, particularly at these age levels, when children are learning so much so quickly). They say we erred in examining studies of programs that did not match their ideological stance. Without taking any position on their beliefs, how does one explain the positive benefits obtained from studies of the instructional approaches that Dail and Payne eschew for ideological reasons?
Early literacy and English-language learners
We agree completely with Gutiérrez et al. that there is a need for more research on the early literacy development of children who are English-language learners (ELLs). One of our frustrations was the inability to address questions that were specifically about children who are ELLs because there were insufficient data. Gutiérrez et al.’s critique reflects a common misunderstanding of meta-analysis. Although the researchers who published the studies summarized by NELP often included diverse samples of children (including ELLs) in their studies, they rarely reported their data separately; therefore, we were prevented from evaluating the relative impact of these instructional approaches on these groups. Thus, ELL children contributed data to these analyses, and when there were sufficient subgroup data to allow comparison, we found scant evidence that child or family characteristics moderated the overall conclusions. The lack of data on ELLs is evident in studies of older students as well (Shanahan & Beck, 2006), though not to the extent that NELP uncovered with preschool and kindergarten children.
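In meta-analytic terms, determining whether a characteristic such as ELL status moderates an effect requires that enough studies (or within-study subgroups) report outcomes separately for each group, so that average effect sizes can be compared. A minimal sketch of the conventional fixed-effect subgroup comparison (a textbook formula, not a reproduction of the NELP analyses) is

$$ Q_{B} \;=\; \sum_{j=1}^{k} w_{\cdot j}\,\left(\bar{g}_{j} - \bar{g}\right)^{2}, \qquad w_{\cdot j} = \sum_{i \in j} w_{ij}, $$

where $\bar{g}_{j}$ is the weighted mean effect size in subgroup $j$ (e.g., samples of ELL vs. non-ELL children), $\bar{g}$ is the overall weighted mean, and $Q_{B}$ is referred to a chi-square distribution with $k - 1$ degrees of freedom. When primary studies do not report results separately for ELL children, the subgroup means cannot be formed, which is precisely the constraint described above.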
Because there was little evidence that child or family characteristics moderated the conclusions concerning the skills that predict later literacy or the instruction that promotes these skills, we argued that there was no reason to withhold these educational opportunities from any children. Gutiérrez et al. take exception to this recommendation. The majority of their critique concerns the ways that children whose home language is not English may differ from children whose home language is English. They correctly note that there is insufficient empirical evidence to determine whether a host of potential linguistic, socio-cultural, and family factors change the nature of the relations between early skills and later reading, or even whether a different set of skills would be identified for these children than for children whose home language is English. What troubles us is that although they question how these results apply to young ELLs, they do not offer any concrete alternatives. We wonder, then, what an early childhood professional is to do when faced with one or more children whose home language is not English.
Research published subsequent to the NELP synthesis provides insight into the development of early literacy skills among children who are ELLs. Similar to the findings of studies with school-age ELLs (August & Shanahan, 2006), some of this recent research indicates that there is significant consistency in the structure and function of code-related skills between monolingual and bilingual children (Anthony et al., 2006, 2009; Branum-Martin et al., 2006). A recent study by Branum-Martin et al. (2009) indicates that measuring language-related skills across languages is complicated by child factors, instructional factors, and type of measure, with more within-language consistency than between-language consistency. Data from prediction studies indicate that within-language predictions are stronger than between-language predictions (e.g., Anthony et al., 2009; Farver, Nakamoto, & Lonigan, 2007; Gottardo & Mueller, 2009), and these studies suggest that the skills identified as strong predictors of later literacy function similarly for children who are ELLs.
There are still few studies evaluating the effectiveness of instructional practices for promoting early literacy skills with these children. An exception is the study by Farver, Lonigan, and Eppe (2009), conducted with Spanish-speaking ELLs. This study found that interventions like those identified in the Report as effective for promoting oral-language and code-related skills yielded moderate to large and statistically significant effects (i.e., effect sizes ranged from .40 to .94) relative to a group of children who participated only in their regular Head Start activities. These results support the Panel’s conclusion that there is no reason the code-focused and shared-reading instructional activities identified as effective should not be used with children whose home language is not English.
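To put these values in context (assuming, as is conventional in this literature, that the reported effect sizes are standardized mean differences; the percentile translations below also assume roughly normal score distributions and are illustrative rather than taken from Farver et al.):

$$ d \;=\; \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}, $$

so an effect of .40 means the average intervention child scored about 0.4 pooled standard deviations above the average control child (roughly the 66th percentile of the control distribution), and an effect of .94 corresponds to nearly a full standard deviation (roughly the 83rd percentile).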
As far as we can tell, Orellana and D’warte’s primary critiques of the NELP Report are that our analyses were directed at answering questions concerning the predictors of reading and writing and that we did not focus on the types of skills on which they conduct research. We agree completely. Orellana and D’warte wonder who determines which skills matter in literacy. They want to impose a definition of literacy politically, which is markedly different from the empirical approach adopted by NELP. The Panel rejected the idea of using expertise or standing to impose a vision of what its members thought might be important and instead provided a thorough examination of the relevant empirical data measuring the relation of early skills to later conventional literacy. Orellana and D’warte suggest that we erred in not including measures of aesthetic sensibilities, critical awareness, or the ability to tailor messages to particular audiences as prerequisite literacy skills for 3- and 4-year-old children--a list drawn, apparently (given the citations), from studies of high school students and from opinion pieces.
When everything is literacy, the term “literacy” no longer has any useful meaning. At what point should skills that are useful for something not be included in the category “things that are literacy”? Uses of the term that extend beyond its etymological roots (i.e., from Middle English literat, from Latin litteratus, “marked with letters, literate,” from litterae, “letters, literature,” the plural of littera) are typically contextualized by a modifier, as in “computer literacy,” “mathematical literacy,” “artistic literacy,” or “transcultural literacy.” Each of these is useful to someone and a worthy topic of investigation; however, they are not reading and writing per se, which is how we defined “conventional literacy.” Without doubt, the pragmatics of being able to read or write different genres, for different purposes or audiences, or to negotiate differences between written and spoken grammar belong in the domain of conventional literacy; however, there are no longitudinal studies that would help identify the precursors to these abilities.
The state of early childhood education today
We agree with Teale et al. that the evidence summarized in the Report does not constitute a mandate to teach code-related skills to the exclusion of meaning-related skills like vocabulary, grammar, and background knowledge. Indeed, we believe that the evidence in the Report indicates a need to go beyond simple vocabulary and nebulous recommendations to create “language rich environments.” At present, there is strong evidence for instructional activities that promote code-related skills; unfortunately, the evidence for instructional activities that promote oral language skills is less compelling. Whereas there is clear support for shared-reading interventions resulting in improved vocabulary skills, there is less evidence concerning effective instructional strategies for other meaning-related skills, such as listening comprehension or background knowledge, and no evidence that even the most powerful of the shared-reading interventions results in improved reading skills. This gap in the research base needs to be filled.
There is a degree of resistance to literacy instruction in the early childhood community, and much early childhood education eschews the idea of teachers determining instructional content and activities. Between 59% and 70% of Head Start and other early childhood education programs serving children who are at risk of later academic difficulties use either the High/Scope Curriculum or the Creative Curriculum (Jackson et al., 2007; U.S. Department of Health and Human Services, 2005). Neither curriculum has causally interpretable research evidence indicating that its use results in increased development of early literacy, other pre-academic, or socio-emotional skills relative to alternative curricula. Much of this “instructional tension” in early childhood education probably owes its genesis to the expert consensus recommendations embodied in the practice guides concerning “developmentally appropriate practice” produced by the National Association for the Education of Young Children. Although these guides have increased their focus on specific skills and intentional instruction, the fact that the early guides, which were not based on meaningful evidence, declared such educational practices “developmentally inappropriate” has created a legacy that will continue for some time.
Although NELP highlighted strong evidence for promoting both code- and meaning-focused skills through focused, teacher-directed, and intentional instruction, the Report also explicitly noted that none of the studies of effective instructional practices involved a model of instruction that relied on whole-group choral recitation. The implication of the evidence summarized by NELP is that what one would observe in “a really good preschool” (Neuman) should depend on the strengths and needs of the children in the classroom. Many children are likely to do well in a traditional early childhood classroom with relatively little teacher-directed instruction. Some will not, however, and the evidence summarized in the NELP Report provides a guide for identifying those children who are not on a developmental trajectory to succeed. The evidence for effective instructional strategies summarized in the Report provides a starting point for actions that can help put all children on a trajectory of success. Some of those actions will look like “fiddling around with sounds associated with printed letters,” playing with sounds in words, and engaging in meaningful and extended language interactions with the teacher. Such a preschool will be “really good” because it is responsive to the needs of individual real children--not just to the idealized children who come to preschool well equipped to seek out the knowledge they will need and who have the tools to do much of the work on their own.
Conclusions
As noted by Whitehurst and Lonigan (1998), there are many “pleasing ideas” in the realm of early literacy. When these ideas are subjected to the light of empirical scrutiny, however, not all emerge unscathed. It is almost a certainty that when a set of studies is subjected to the rigors of a meta-analysis, someone’s sacred cow will be threatened. These critics are correct that the NELP Report does not provide simple answers or espouse a mandate for early childhood education. It summarizes the available evidence, with all of its nuances and blemishes. We welcome the fact that the Report has researchers, practitioners, and policymakers discussing the implications of current evidence for improving early childhood education, and we look forward to continuing the conversation.
Contributor Information
Christopher J. Lonigan, Florida State University.
Timothy Shanahan, University of Illinois at Chicago.
References
- Anthony JL, Solari EJ, Williams JM, Schoger KD, Zhang A, Branum-Martin L, Francis DJ. Development of bilingual phonological awareness in Spanish-speaking English language learners: The roles of vocabulary, letter knowledge, and prior phonological awareness. Scientific Studies of Reading. 2009;13:535–564.
- Anthony JL, Williams JM, McDonald R, Corbitt-Shindler D, Carlson CD, Francis DJ. Phonological processing and emergent literacy in Spanish-speaking preschool children. Annals of Dyslexia. 2006;56:239–270. doi:10.1007/s11881-006-0011-5
- Arnold DS, Lonigan CJ, Whitehurst GJ, Epstein JN. Accelerating language development through picture-book reading: Replication and extension to a videotape training format. Journal of Educational Psychology. 1994;86:235–243.
- Branum-Martin L, Mehta PD, Fletcher JM, Carlson CD, Ortiz A, Carlo M, Francis DJ. Bilingual phonological awareness: Multilevel construct validation among Spanish-speaking kindergarteners in transitional bilingual education classrooms. Journal of Educational Psychology. 2006;98:170–181.
- Branum-Martin L, Mehta PD, Francis DJ, Foorman BR, Cirino PT, Miller JF, Iglesias A. Pictures and words: Spanish and English vocabulary in classrooms. Journal of Educational Psychology. 2009;101:897–911.
- Connor CM, Schatschneider C, Morrison FJ, Ponitz CC, Piasta SB, Fishman BJ, Crowe EC, Glasney S, Underwood PS. Back to the future: Contrasting scientific styles in understanding reading. Educational Researcher. 2009;38:537–540.
- Farver JM, Lonigan CJ, Eppe S. Effective early literacy skill development for young English language learners: An experimental study of two methods. Child Development. 2009;80:703–719. doi:10.1111/j.1467-8624.2009.01292.x
- Farver JM, Nakamoto J, Lonigan CJ. Assessing preschoolers’ emergent literacy skills in English and Spanish with the Get Ready to Read! Screening Tool. Annals of Dyslexia. 2007;57:161–178. doi:10.1007/s11881-007-0007-9
- Gottardo A, Mueller J. Are first- and second-language factors related in predicting second-language reading comprehension? A study of Spanish-speaking children acquiring English as a second language from first to second grade. Journal of Educational Psychology. 2009;101:330–344.
- Jackson R, McCoy A, Pistorino C, Wilkinson A, Burghardt J, Clark M, et al. National evaluation of Early Reading First: Final report to Congress. Washington, DC: Institute of Education Sciences; 2007.
- Lonigan CJ, Anthony JL, Bloomfield B, Dyer SM, Samwel C. Effects of two preschool shared reading interventions on the emergent literacy skills of children from low-income families. Journal of Early Intervention. 1999;22:306–322.
- Lonigan CJ, Whitehurst GJ. Relative efficacy of parent and teacher involvement in a shared-reading intervention for preschool children from low-income backgrounds. Early Childhood Research Quarterly. 1998;17:265–292.
- Lonigan CJ, Schatschneider C, Westberg L. Identification of children’s skills and abilities linked to later outcomes in reading, writing, and spelling. In: Developing early literacy: Report of the National Early Literacy Panel. Washington, DC: National Institute for Literacy; 2008. pp. 55–106.
- Sénéchal M, LeFevre J. Parental involvement in the development of children’s reading skill: A five-year longitudinal study. Child Development. 2002;73:445–460. doi:10.1111/1467-8624.00417
- Shanahan T. Review of Interdisciplinary Approaches to Literacy and Development. Comparative Education Review. (in press)
- Shanahan T, Beck I. Effective literacy teaching for English-language learners. In: August D, Shanahan T, editors. Developing literacy in second-language learners: Report of the National Literacy Panel on Language-Minority Children and Youth. Mahwah, NJ: Lawrence Erlbaum Associates; 2006. pp. 415–488.
- Storch SA, Whitehurst GJ. Oral language and code-related precursors to reading: Evidence from a longitudinal structural model. Developmental Psychology. 2002;38:934–947.
- Street BV. Literacy in theory and practice. Cambridge: Cambridge University Press; 1984.
- Taconis R, Ferguson-Hessler MGM, Broekkamp H. Teaching science problem solving: An overview of experimental work. Journal of Research in Science Teaching. 2001;38:442–468.
- U.S. Department of Health and Human Services, Administration for Children and Families. Head Start impact study: First year findings. Washington, DC: Author; May 2005.
- Whitehurst GJ, Lonigan CJ. Child development and emergent literacy. Child Development. 1998;69:848–872.