Author manuscript; available in PMC: 2011 Jan 18.
Published in final edited form as: Behav Res Methods. 2008 Feb;40(1):278–289. doi: 10.3758/brm.40.1.278

New and updated tests of print exposure and reading abilities in college students

Daniel J Acheson 1, Justine B Wells 1, Maryellen C MacDonald 1
PMCID: PMC3022331  NIHMSID: NIHMS244840  PMID: 18411551

Abstract

The relationship between print exposure and measures of reading skill was examined in college students (N = 99, 58 female; mean age = 20.3 years). Print exposure was measured with several new self-reports of reading and writing habits, as well as updated versions of the Author Recognition Test and the Magazine Recognition Test (Stanovich & West, 1989). Participants completed a sentence comprehension task with syntactically complex sentences, and reading times and comprehension accuracy were measured. An additional measure of reading skill was provided by participants’ scores on the verbal portions of the ACT, a standardized achievement test. Higher levels of print exposure were associated with higher sentence processing abilities and superior verbal ACT performance. The relative merits of different print exposure assessments are discussed.


Although most adults in Western societies are literate, there are widespread differences in the amounts and types of material people read. Many studies have suggested that differences in the types and amounts of reading and writing that people undertake lead to individual differences across many cognitive dimensions, a result that is consistent with the important role of practice in the development of cognitive abilities and other skills (see, e.g., Simon & Newell, 1974). For example, considerable evidence suggests that variability in readers’ print exposure—the amount of text they read—is associated with variability in their orthographic and phonological processing skill, including differences in lexical decision latency (Chateau & Jared, 2000), reading comprehension (Cipielewski & Stanovich, 1992), nonword naming (McBride-Chang, Manis, Seidenberg, Custodio, & Doi, 1993), vocabulary size (Frijters, Barron, & Brunello, 2000), knowledge of homophone spellings (Stanovich & West, 1989), and verbal fluency measures (Stanovich & Cunningham, 1992). Other studies have examined the relationship between print exposure and more global skills, and suggest that, through reading more frequently, individuals gain the opportunity to learn more about semantic relations, concepts, categorization, history, and culture, and to acquire skills such as logical reasoning (Scribner & Cole, 1981; West, Stanovich, & Mitchell, 1993).

Despite the robust relationship between print exposure and verbal, nonverbal, and reading skills, accurately measuring print exposure levels in individuals has proven to be difficult. A standard approach is to assess print exposure through self-report measures, commonly in the form of questionnaires in which participants are asked to report such information as how much time they spend reading per week and how much they enjoy reading (e.g., Greaney, 1980; Guthrie, 1981; Lewis & Teale, 1980). Cunningham and Stanovich (1990, 1991) questioned the validity of such measures, suggesting that it is very difficult for participants to answer these questions in a reliable manner. A more involved form of self-report, in which individuals keep a daily log of their reading behaviors, has also been employed on occasion (e.g., Anderson, Wilson, & Fielding, 1988), and these diaries appear to provide a fairly reliable assessment of print exposure (Chateau & Jared, 2000). Both forms of self-report, however, are subject to criticism concerning the degree to which they promote socially desirable responding in the form of exaggerated reports of reading frequency (e.g., Ennis, 1965; Sharon, 1973–1974; West et al., 1993; Zill & Winglee, 1990).

In an attempt to circumvent the difficulties associated with self-report assessments of print exposure, Stanovich and West (1989) developed the Author Recognition Test (ART) and the Magazine Recognition Test (MRT). Later, a similar test, the Title Recognition Test (TRT), using the same logic as the ART and the MRT, was developed as an additional measure of print exposure (Cunningham & Stanovich, 1990). In these tests, participants are given a list of authors, magazines, or book titles intermixed with a set of compelling foils, and are asked to indicate which items they recognize as the names of real authors, magazines, or book titles, respectively. Stanovich and colleagues suggested that the recognition test format avoids socially desirable responding in two ways. First, participants are not being directly interrogated about the time they spend reading. Second, participants are discouraged from claiming to recognize more names than they actually know, since they are told that a penalty is associated with marking a foil. In subsequent studies, the ART, MRT, and TRT have been validated as good indicators of individual differences in exposure to print (Stanovich & Cunningham, 1992; West et al., 1993) and subsequently have been related to many of the measures of phonological and orthographic skill noted above.

Although a growing body of work has related print exposure measures to measures of reading skill, most such studies have related print exposure to lexical processing, using tasks such as lexical decision and nonword naming. These tasks clearly tap important components of reading and comprehension skill, but there are other domains of language comprehension that are relatively unexplored. For example, few studies have attempted to relate print exposure measures to sentence-level comprehension (but see Stanovich & Cunningham, 1992), and none have related print exposure to word reading speed and comprehension accuracy within sentence contexts. Print exposure is a likely correlate of syntactic-level processes because syntactically complex structures are generally found in greater proportion in written text than in speech (Biber, 1986), thus providing important experience relevant to syntactic comprehension. Moreover, several studies using word reading time and comprehension accuracy measures have demonstrated substantial individual differences in comprehension of syntactically complex sentences in college student readers (Just & Carpenter, 1992; King & Just, 1991; Pearlmutter & MacDonald, 1995), and researchers have hypothesized a relationship between these results and variations in print exposure in relatively good and poor college student readers (MacDonald & Christiansen, 2002; Pearlmutter & MacDonald, 1995). These claims are interesting, but there are two concerns to be addressed. First, the hypothesized link between print exposure and sentence comprehension abilities has not been accompanied by direct measures of print exposure in the readers participating in the studies of sentence comprehension. Second, some studies have failed to demonstrate clear individual differences in these comprehension measures in groups that differ widely in working memory or other assessments typically thought to correlate with comprehension skills (Waters & Caplan, 1996). There is thus a gap between theory and data in this area, in that there is abundant evidence of the role of print exposure in lexical tasks, but little real evidence directly linking print exposure and sentence-level reading processes.

More broadly, a second gap in the literature concerns the age range over which print exposure is associated with individual differences in reading skill. Much of the work that has been conducted to examine this relationship has focused on children (e.g., Allen, Cipielewski, & Stanovich, 1992; Cipielewski & Stanovich, 1992) or has considered differences across populations such as typical versus dyslexic or other atypical readers (e.g., McBride-Chang et al., 1993). Some studies have investigated the effects of print exposure in adults (e.g., Beech, 2002; Stanovich & Cunningham, 1992; Stringer & Stanovich, 2000; West et al., 1993), but there is still relatively little evidence concerning how print exposure measures relate to individual differences in the reading skill of literate adults, such as college students. We address this issue in the present study by considering how both self-report and objective (i.e., ART and MRT) print exposure measures relate to reading comprehension abilities in college students. Because the homogeneity of the sample is likely to result in a restricted range in both print exposure and reading measures, we used multiple assessments of both of these constructs to improve our chances of observing a relationship between them.

To assess sentence-level comprehension processes, we used the standard self-paced reading method that is a common measure of sentence comprehension processes in studies with college student samples, which assesses both reading speed and comprehension accuracy. We supplemented the sentence task with more global measures of verbal skill using scores on the verbal portions of a standardized achievement test, the ACT (see www.actstudent.org/testprep/descriptions for details), which assesses both reading comprehension ability and a wide variety of reading-relevant skills, such as knowledge of grammar, proper English usage, rhetorical skills, and the ability to draw inferences from written passages. Although the ACT is a proprietary test, the frequency of its administration to college students ensures that the scores of many individuals are available. These standardized tests thus provide a broad, extensively normed measure of individuals’ reading and comprehension abilities.

We used multiple measures of print exposure both to have the best chance of finding meaningful amounts of variation and to address the question of how reliably self-report measures index print exposure in the college student sample. We developed a three-part self-report questionnaire, attempting to avoid some of the pitfalls of similar questionnaires that have been used in past research. The first part of the questionnaire (the Time Spent Reading section) assessed the amount of time participants typically spent reading. Many previous assessments (e.g., Stanovich & West, 1989) have asked participants a single, general question regarding how much time (usually how many hours in a typical week) they spend reading. It may be difficult for participants to think about all the reading they do in a week, and sensitivity may thus be lost in such assessments. Individuals often have substantial difficulty estimating both the time they spend engaging in particular activities and the duration of certain events (e.g., Cohen, 1971; Guay, 1982), and the general question concerning reading may encourage respondents to think about reading in traditional domains (novels, textbooks) and neglect less traditional ones (email, browsing the Internet, etc.). In an attempt to promote more accurate estimations of reading activities, the Time Spent Reading section asked seven individual questions addressing how many hours per week participants spend reading specific types of material (e.g., textbooks, e-mail, newspapers).

The Time Spent Writing section of the questionnaire assessed how much time individuals spend writing, on the view that writing activities could also be a strong correlate of reading comprehension and other measures of reading skill. This section also contained seven questions about how much time participants spend writing various types of materials (e-mail, job-related writing, papers for classes, etc.).

In the Comparative Reading Habits (CRH) section of the questionnaire, participants were asked to compare their own reading habits with those of their peers (other college students) on five dimensions: time spent reading, enjoyment of reading, reading speed, complexity of reading material, and comprehension of reading material. In addition to probing aspects of print exposure not covered by the time reports, the inclusion of this section allowed us to compare the predictive power of two different forms of assessing reading activities: a raw report of hours per week spent reading or writing and a comparative assessment relative to peers. Extensive research on social comparisons has shown that explicit comparative judgments yield more accurate self-report data than do noncomparative assessments (e.g., Bandura, 1997), and the same may be true of readers’ assessments of their print exposure.

METHOD

Participants

Ninety-nine undergraduates (58 female, 41 male) volunteered in exchange for either course credit or cash compensation.

Materials

We developed the three-part Reading Habits Questionnaire as well as materials for two objective measures of print exposure (updated versions of the ART and the MRT), and sentence materials for presentation in a self-paced reading task that assessed reading speed and comprehension accuracy.

Author Recognition Test

In the original Stanovich and West (1989) ART, participants read a list of names and identified which ones were names of authors of works of fiction. The genuine authors in the original ART included those whose work students were likely to have read in high school, some more literary choices, and authors of novels that were popular at the time the test was developed. Pilot testing revealed that many authors on the original list that had been popular in the 1980s were now unfamiliar to college students, and so we attempted to develop a list that reflected a mix of classic and more recently popular authors. We tested multiple versions of the measure (using a total of 105 additional participants, none of whom participated in the present study), replacing authors who had extremely high or extremely low identification rates, so as to settle on a list of authors of generally moderate familiarity to our sample, together with foil names that pilot participants erroneously identified as authors somewhat frequently. Our final list included 65 real authors and 65 foils, whereas the original ART had contained 50 of each. Of the 65 real authors, 15 were retained from the original Stanovich and West ART; all authors from the revised test, together with their rates of selection by the final 99 participants, are shown in Appendix A.

APPENDIX A.

Names and Selection Rates (N = 99) of Real Authors Used on the Author Recognition Test

Name   Selection Rate (%)   Name   Selection Rate (%)   Name   Selection Rate (%)
Names Maintained From Stanovich and West (1989)
Maya Angelou 78 Dick Francis 10 Toni Morrison 58
Isaac Asimov 46 Stephen King 99 Sidney Sheldon 23
Jean M. Auel 11 Judith Krantz 23 Danielle Steel 88
James Clavell 8 Robert Ludlum 22 J. R. R. Tolkien 88
Jackie Collins 30 James Michener 19 Alice Walker 34
New Names
Isabel Allende 15 F. Scott Fitzgerald 88 Vladimir Nabokov 25
Margaret Atwood 29 Sue Grafton 26 Joyce Carol Oates 26
Ann Beattie 10 John Grisham 88 Michael Ondaatje 7
Samuel Beckett 29 Ernest Hemingway 99 George Orwell 80
Saul Bellow 12 Brian Herbert 2 James Patterson 18
T. C. Boyle 19 Tony Hillerman 9 Thomas Pynchon 6
Ray Bradbury 58 John Irving 49 Ayn Rand 38
Willa Cather 28 Kazuo Ishiguro 5 Salman Rushdie 22
Raymond Chandler 7 James Joyce 53 J. D. Salinger 77
Tom Clancy 95 Jonathan Kellerman 7 Jane Smiley 13
Clive Cussler 13 Wally Lamb 24 Paul Theroux 7
Nelson Demille 4 Harper Lee 47 Kurt Vonnegut 65
Umberto Eco 9 Jack London 72 E. B. White 72
T. S. Eliot 85 Bernard Malamud 8 Thomas Wolfe 26
Ralph Ellison 21 Gabriel García Márquez 20 Virginia Woolf 70
Nora Ephron 9 Anne McCaffrey 23 Herman Wouk 8
William Faulkner 73 Margaret Mitchell 11

Magazine Recognition Test

We developed an updated version of the original Stanovich and West (1989) MRT in which participants are given a list of titles and are instructed to mark those titles that they think are names of real magazines. As with the ART, we sought to increase the test’s sensitivity both by expanding the number of items from 100 to 130 (65 real magazine titles and 65 plausible foils) and by piloting a longer version of the MRT (using a total of 33 participants, none of whom participated in the present study), eliminating magazines no longer being published, ones that were too easy (correctly selected by nearly all participants), and very obscure titles (rarely identified as real magazines). Like the Stanovich and West items, most of the real titles were those of popular magazines in a wide variety of genres. Sixteen of the real titles from Stanovich and West were maintained, and 49 new titles were added; see Appendix B for real magazine items and their selection rates.

APPENDIX B.

Real Magazine Titles and Selection Rates (N = 99) Used on the Magazine Recognition Test

Name   Selection Rate (%)   Name   Selection Rate (%)   Name   Selection Rate (%)
Titles Maintained From Stanovich and West (1989)
Business Week 58 Harper’s Magazine 40 Outdoor Life 44
Car & Driver 62 Hot Rod 52 Popular Science 78
Discover 46 Jet 23 Psychology Today 25
Ebony 65 Ladies Home Journal 51 Redbook 68
Family Circle 45 Motor Trend 46 The Progressive 12
Field & Stream 64
New Titles
Atlantic Monthly 22 Gourmet 17 Self 38
Backpacker 11 Guitar Player 16 Ski Magazine 16
Biography 32 Hunting 2 Smithsonian 51
Black Enterprise 5 InStyle 64 Spin 68
Boating World 18 Maxim 86 Stuff 22
Bon Appetit 39 Men’s Health 69 Technology 4
Cat Fancy 34 Men’s Journal 25 The Source 21
Cigar Aficionado 26 Modern Bride 46 Ultimate Audio 3
Consumer’s Digest 80 Money 58 U.S. News & World Report 79
Country Living 52 Mountain Bike 11 Vegetarian Times 4
Details 10 Organic Gardening 8 Vibe 60
Flex 19 PC World 61 Wildlife Conservation 4
Food & Wine 29 Popular Mechanics 70 Wired 40
Fortune 60 Premiere 21 Women’s Day 52
Game Pro 28 Rosie 40 Working Mother 10
Golf World 28 Science News 14 Yoga Journal 7
Good Housekeeping 80

Reading Habits Self-Report

All questions from the new Reading Habits Self-Report are presented in Appendix C. In Section I (Time Spent Reading), participants were asked to estimate the amount of time they spend in a typical week reading certain types of material. Those who participated during the summer were instructed to think of a typical week during the school year. In Section II (Time Spent Writing), participants estimated how much time they spent writing different types of material. In Section III (Comparative Reading Habits), participants compared their own reading habits to those of other college students on a Likert scale ranging from 1 to 7, with higher numbers indicating greater amounts relative to peers. Each of the five questions in this section was intended to assess a particular aspect of participants’ reading habits relative to that of other college students: time spent reading, complexity of reading material, reading enjoyment, reading speed, and reading comprehension ability.

APPENDIX C.

Reading Habits Self-Reports

Section I: Reading Time Estimates
   Each participant indicated the number of hours that best reflected how much time he or she spent in a typical week reading each type of material listed below. The range of 0–7 h was provided on the questionnaire for participants to circle for each question; the highest number was presented as “7+” and was to be used to indicate 7 h or more per week reading a type of reading material.
  1. Textbooks

  2. Academic materials other than textbooks

  3. Magazines

  4. Newspapers

  5. E-mail

  6. Internet media (all subjects not including e-mail)

  7. Fiction books

  8. Nonfiction/special interest books

  9. Other categories (to be filled in by participant)

Section II: Writing Time Estimates
   Each participant indicated the number of hours that best reflected how much time he or she spent in a typical week writing each type of material listed below. The range of 0–7 h was provided on the questionnaire for participants to circle for each question; the highest number was presented as “7+” and was to be used to indicate 7 h or more per week writing a type of material.
  1. All forms of writing assignments required for classes

  2. Newspaper articles or Internet media not required for class (not including e-mail)

  3. Personal material (e.g., diaries, journals, letters)

  4. E-mail

  5. Creative writing not required for classes (e.g., fiction, poetry, plays)

  6. Job-related material not including e-mail (e.g., memos, reports, transcripts, etc.)

  7. Other categories (to be filled in by participant)

Section III: Comparative Reading Habits
   For each of the questions in this section, participants circled a number on a scale of 1 to 7, with higher numbers indicating greater amounts of the quantity in question (time, enjoyment, etc.).
  1. Compared to other college students, how much time do you spend reading all types of materials?

  2. Compared to the reading material of other college students, how complex do you think your reading material is?

  3. Compared to other college students, how much do you enjoy reading?

  4. Compared to other college students, how fast do you normally read?

  5. Compared to other college students, when reading at your normal pace, how well do you understand the reading material?

Sentence comprehension

The materials for a computerized reading task comprised 60 syntactically complex sentences, 12 for each of five types. The sentences were unrelated in topic. A yes/no question to assess comprehension was prepared for each sentence; the correct answer was “yes” for half of the questions. The five types were (1) sentential complements (e.g., The scientist insisted that the hypothesis was being contemplated, for which the comprehension question [Q] was, Was the hypothesis being contemplated?), (2) subject relative clauses (e.g., The representative that denounced the president slammed the door after the meeting, for which Q was, Did the president slam the door?), (3) object relative clauses (e.g., The witness that the investigator contacted waited outside the small café, for which Q was, Did the investigator contact the witness?), (4) extended subordinate clauses (e.g., Although the potatoes were shredded very carefully by the assistant cook, they came out unevenly and were unattractive, for which Q was, Were the potatoes shredded carelessly?), and (5) multiple prepositional phrases (e.g., The professor of the class with weekly readings was pleased by the students, for which Q was, Was the professor unhappy with the students?).

Procedure

The tasks were completed during multiple sessions over a 3- to 4-week period as part of a larger study. The computer-based sentence comprehension data were collected on the 1st day, and the print exposure measures were completed in subsequent sessions.

Author and Magazine Recognition Tests

Each test contained 130 intermixed real and foil items and was printed on a single sheet of paper. Participants were instructed to mark the items they knew to be real authors or magazine titles, as appropriate. They were instructed not to guess, since a penalty would be given for all incorrect answers. Each participant’s score was the total number of correct authors or magazines marked minus the number of foils marked. Since there were 65 real items on each test, the highest possible score was 65 for each test.
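To make the scoring rule concrete, the following Python sketch computes a recognition test score as the number of real items marked minus the number of foils marked. The item names and responses in the example are illustrative only, not items from the actual tests.

```python
# Minimal sketch of the ART/MRT scoring rule described above:
# one point per real item marked, minus one point per foil marked.

def score_recognition_test(marked_items, real_items, foil_items):
    """Return hits minus false alarms for a recognition-format print exposure test."""
    marked = set(marked_items)
    hits = len(marked & set(real_items))
    false_alarms = len(marked & set(foil_items))
    return hits - false_alarms

# Example with made-up responses; foil names are hypothetical, not actual test foils.
real = {"Stephen King", "Toni Morrison", "George Orwell"}
foils = {"Alan Brightman", "Carol Dempsey"}
responses = {"Stephen King", "George Orwell", "Carol Dempsey"}
print(score_recognition_test(responses, real, foils))  # 2 hits - 1 foil = 1
```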

Sentence comprehension

Sentences were presented on a computer screen using a word-by-word, subject-paced “moving window” display in which only one word of the sentence is visible at any time and dashes represent the locations of previous and upcoming words. The use of dashes permits relatively natural eye movements from one word position to the next, and several studies have shown that reading times in this paradigm correspond closely to reading times and eye fixation data when the entire sentence is in view (Just, Carpenter, & Woolley, 1982; Kennedy & Murray, 1984). The task is an extremely common one in studies of syntactic comprehension in young adults, including assessments of individual differences in sentence comprehension (King & Just, 1991; Pearlmutter & MacDonald, 1995).

At the beginning of each trial, all nonspace characters in a sentence were indicated by dashes on the computer screen. When the participant pressed the space bar, the first group of dashes was replaced by the first word of the sentence. Each subsequent keypress caused the next word to appear and the previous word to be replaced with dashes; reading times were measured for each word from the onset of its presentation to the next keypress. The keypress following the last word of the sentence removed the sentence and displayed the comprehension question in its entirety. Participants answered the question with keys labeled “Yes” and “No.” Participants received feedback on the correctness of their responses.
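The trial logic just described can be summarized in a short sketch. The `present` and `wait_for_space` helpers below are hypothetical stand-ins for whatever stimulus presentation routines the experiment software provides; only the masking and per-word timing logic is shown, and this is not the authors’ actual experiment code.

```python
import time

def mask(word):
    """Replace every nonspace character with a dash."""
    return "".join("-" if c != " " else " " for c in word)

def run_trial(sentence, present, wait_for_space):
    """One moving-window trial: reveal one word per keypress, record per-word RTs."""
    words = sentence.split(" ")
    present(" ".join(mask(w) for w in words))   # initial all-dash display
    wait_for_space()                            # first press reveals word 1
    reading_times = []
    for i, word in enumerate(words):
        # Show word i in place; all other word positions remain masked.
        display = " ".join(w if j == i else mask(w) for j, w in enumerate(words))
        present(display)
        onset = time.monotonic()
        wait_for_space()                        # this press advances to the next word
        reading_times.append((word, time.monotonic() - onset))
    return reading_times                        # per-word reading times in seconds
```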

At the beginning of the task, participants were instructed to read at a normal pace while maintaining good comprehension. After the 10 practice trials, the 60 experimental trials were presented in random order, and participants’ word reading times and answer accuracy were recorded for each trial. The task required 20–30 min to complete.

Verbal achievement test scores

The ACT is a standardized achievement test taken annually by approximately 1.2 million high school students in lieu of, or in addition to, the Scholastic Assessment Test (SAT) (see www.act.org/news/aapfacts.html for more information about the ACT). It is a multiple-choice test similar to the SAT and is divided into four sections: math, science, English, and reading. It is scored on a 36-point scale and, as reported by the College Board, the developer of the SAT (www.collegeboard.com/sat/cbsenior/html/stat00f.html), an ACT score of 36 is comparable to 1600 on the SAT (the sum of the SAT verbal and quantitative portions). An ACT score of 30 is roughly equivalent to an SAT composite score of 1320–1350, and an ACT score of 25 roughly translates to an SAT composite score of 1130–1160. ACT scores have been validated as reliable predictors of future college performance (Noble, 1991) and college class placement in the subjects tested in the ACT (Ang & Noble, 1993).

All participants gave permission to access their student records, and we consulted these records for students’ scores on the verbal portions of the ACT. Of the 99 participants in this study, 78 had their ACT test scores on file, whereas only 15 had reported SAT scores. These proportions reflect the prevalence of ACT testing in the Midwest portion of the United States.

For those participants for whom we could obtain ACT scores, we used subscores for the reading and English portions of the test as general measures of achievement in reading and reading-related domains. The reading subcomponent tests two major aspects of comprehension: understanding of the literal information in written passages and ability to draw inferences from the content of these passages. The reading subcomponent is composed of four prose passages, each consisting of 80–100 lines, with topics in social studies, the humanities, sciences, and fiction (see www.actstudent.org/testprep/descriptions/readdescript.html for more detail). The English subcomponent of the exam tests two major areas: usage/mechanics of English and rhetorical skill. This subcomponent comprises five prose passages ranging between 5 and 15 lines and varying in subject matter. Test takers are required to answer multiple-choice questions about both specific sections of the prose passage and the passage as a whole (see www.actstudent.org/testprep/descriptions/engdescript.html for more information). Both subcomponents have been validated against a standard measure of reading comprehension, the Nelson–Denny test (Noble, 1988; Stiggins, Schmeiser, & Ferguson, 1978).

RESULTS

Three self-report print exposure scores were calculated for each participant on the basis of the sum of the responses in each self-report measure. For the Time Spent Reading and Writing sections, the participant’s score was the sum of the hours estimated per week for each of the reading and writing dimensions probed in each questionnaire, and in the case of the CRH survey, the participant’s score was the sum of the five Likert-scale responses. Composite measures were justified both by the significant pairwise correlations between the subcomponents for each self-report print exposure measure (most ps < .05—see Table 2) and by the construct being measured in each survey—namely, estimates of reading time, writing time, and CRH. As in Stanovich and West (1989), the ART and MRT were scored so that one point was awarded for each real author or magazine correctly identified and one point was subtracted for each foil that was selected.
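As an illustration of this composite scoring, the sketch below sums a participant’s item responses into the three self-report scores. The field names and response values are hypothetical, not actual data from the study.

```python
# Sketch of the composite scoring described above: each self-report score is
# the sum of a participant's item responses. Field names are illustrative.

def composite(responses, items):
    return sum(responses[item] for item in items)

reading_items = ["textbooks", "academic", "magazines", "newspapers",
                 "email", "internet", "fiction", "nonfiction"]
writing_items = ["class", "articles", "personal", "email_writing",
                 "creative", "job"]
crh_items = ["time", "complexity", "enjoyment", "speed", "understanding"]

participant = {  # hypothetical participant: hours/week and 1-7 ratings
    "textbooks": 4, "academic": 3, "magazines": 1, "newspapers": 2,
    "email": 3, "internet": 2, "fiction": 2, "nonfiction": 1,
    "class": 4, "articles": 0, "personal": 2, "email_writing": 2,
    "creative": 1, "job": 1,
    "time": 4, "complexity": 5, "enjoyment": 5, "speed": 4, "understanding": 5,
}

reading_time = composite(participant, reading_items)   # 18
writing_time = composite(participant, writing_items)   # 10
crh_score = composite(participant, crh_items)          # 23
```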

Table 2.

Means, SDs, and Correlations Among Individual Items of Reading Habits Surveys

Self-Report Item   M   SD   Word Reading Time   Question Accuracy   ACT English   ACT Reading   ART   MRT
CRH
    1. Time 4.2 1.3 .05 .21* .20 .11 .14 .21*
    2. Complexity 4.4 1.0 −.14 .01 .16 .05 .18 .28**
    3. Enjoyment 4.8 1.6 −.02 .06 .26* .38** .52** .35**
    4. Speed 4.1 1.4 −.20* .09 .19† .32** .32** .21*
    5. Understanding 4.7 1.0 −.22* −.06 .20† .26* .21* .24*
Reading Time#
    6. Textbooks 4.3 2.5 .14 −.14 .06 −.12 −.33** −.18
    7. Academic 3.3 2.2 .13 .07 .10 .10 .11 .09
    8. Magazines 1.1 1.2 .19† .11 −.08 .07 .13 .15
    9. Newspapers 1.6 1.4 .04 .06 −.16 .18 .10 .10
  10. E-mail 2.5 1.7 .12 −.10 .05 .04 .19 .15
  11. Internet material 2.9 2.2 −.05 −.21* −.20 −.20 −.06 .09
  12. Fiction 2.4 2.4 .04 .00 .14 .34** .41** .24*
  13. Nonfiction 1.3 2.7 .07 −.07 .00 .11 .31** .24*
Writing Time#
  14. For class 3.4 2.2 .00 −.13 −.05 −.10 −.18 −.15
  15. Articles 0.4 0.8 −.11 .06 −.23* −.05 −.09 −.06
  16. Personal 1.6 1.7 .05 .01 .02 .11 .24* .20
  17. E-mail 2.3 1.6 .18 −.02 −.04 −.01 .15 .13
  18. Creative 0.6 1.3 −.05 −.06 .03 .13 .30** .18
  19. Job related 1.2 2.0 .14 .00 .06 .12 .35** .24*
Intercorrelations Among Self-Report Reading Habits Items (columns 1–19 correspond to the CRH, Reading Time, and Writing Time items numbered above)
  2. Complexity  .32**
  3. Enjoyment  .45** .27**
  4. Speed  .28** .30** .61**
  5. Understanding  .20* .36** .38** .34**
  6. Textbooks  .09 .11 −.14 −.04 .07
  7. Academic  .32** .23* .08 −.03 .12 .33**
  8. Magazines  .19 .10 .08 .12 .14 −.03 .09
  9. Newspapers  −.09 .04 .10 .05 .20 −.05 −.04 .46**
  10. E-mail  .33** .24* .25* .04 .16 .10 .29** .29** .02
  11. Internet material  −.06 .07 −.05 −.08 .13 .13 .08 .23** .39** .26**
  12. Fiction  .23* .06 .59** .35** .16 −.34** .03 .23** .25** .17 .00
  13. Nonfiction  .23* .12 .39** .27** .17 −.07 .17 .15 .20* .38** .10 .52**
  14. For class  −.05 .04 −.19 −.14 .00 .41** .43** .07 −.02 .20* .06 −.16 .02
  15. Articles  .06 .21 −.03 .03 .06 .06 .06 .01 .26** −.03 .28** .05 .07 .04
  16. Personal  .05 .10 .29** .18 .18† −.03 .04 .30** .13 .32** .00 .33** .31** .20* −.10
  17. E-mail  .31** .21 .19 .01 .18† .06 .32** .35** .09 .83** .21* .14 .38** .32** −.02 .35**
  18. Creative  .07 .12 .31** .18 .26** −.12 −.09 .09 .10 .17 −.03 .32** .20* .04 −.02 .64** .13
  19. Job related  .16 .22* .30** .18 .05 .00 .21* .20* .16 .33** .13 .37** .37** .00 .13 .11 .22* .12

Note—CRH, comparative reading habits; ART, Author Recognition Test; MRT, Magazine Recognition Test. *Correlation is significant at the .05 level (two-tailed). **Correlation is significant at the .01 level (two-tailed). †Correlation approaches significance (.05 < p < .11). Means and SDs for the CRH items are on a 1–7 scale. #Means and SDs are in hours.

For the sentence comprehension task, both accuracy and reading time per word were measured. The reading time data were analyzed only for those trials on which a participant correctly answered the subsequent comprehension question. The reading data were trimmed to remove all word reading times greater than 2,000 msec and all times greater than 2.5 SDs over a participant’s mean reading time, affecting 1.6% of the reading time data.
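A minimal sketch of this trimming procedure is given below, assuming a participant’s word reading times (in milliseconds) from correctly answered trials are already collected in a list. Whether the SD cutoff is computed before or after the absolute 2,000-msec cutoff is not stated above, so the sketch applies it afterward.

```python
import numpy as np

# Sketch of the reading time trimming described above: per participant, drop
# word reading times over 2,000 ms or more than 2.5 SDs above that
# participant's mean (here computed after the absolute cutoff).

def trim_reading_times(rts):
    rts = np.asarray(rts, dtype=float)
    rts = rts[rts <= 2000]                    # absolute cutoff
    cutoff = rts.mean() + 2.5 * rts.std()     # participant-specific cutoff
    return rts[rts <= cutoff]

rts = [310, 295, 420, 2600, 380, 1450, 330]   # illustrative values (ms)
print(trim_reading_times(rts))                # the 2,600 ms value is removed
```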

Table 1 presents the means and SDs for scores on the nine primary measures taken in the study. Table 2 presents means and SDs for each of the individual items of the self-report questionnaires. As the SDs suggest, most measures elicited considerable variability. One exception was sentence comprehension question accuracy in the self-paced reading task, on which all participants performed relatively well. Participants’ ACT scores were also less variable than some measures and were above the national average (in 2003, the English national average was 20.5 and the reading national average was 21.2), as reported by the ACT Testing organization (www.act.org).

Table 1.

Mean Scores (With SDs) on General Verbal, Reading, and Print Exposure Measures (N = 99)

Measure M SD
Reading
  Self-Paced Sentence Comprehension
    Average word reading time (msec) 357.41 78.76
    Overall sentence comprehension accuracy (%) 82.6 7.6
ACT* (n = 78)
  English 26.8 3.4
  Reading 28.3 4.7
Objective Print Exposure
  ART** 22.7 10.8
  MRT** 21.8 9.7
Self-Reported Reading Habits
  CRH 22.2 4.4
  Hours per week reading 19.4 7.4
  Hours per week writing 9.7 5.6

Note—ART, Author Recognition Test; MRT, Magazine Recognition Test; CRH, comparative reading habits.

*Maximum possible score is 36. **Maximum possible score is 65. Maximum possible score on the CRH is 35.

Selection rates for real authors and magazines on the ART and MRT are presented in Appendix A and Appendix B, respectively. On the ART, selection rates ranged from 2% to 99%, and on the MRT they ranged from 2% to 86%. As the mean selection rates of 36% (ART) and 37% (MRT) suggest, the tests were fairly challenging.

Overall Correlations

Table 3 presents a matrix displaying correlations between scores on each of the measures shown in Table 1. Table 3 shows that the various measures of sentence comprehension and other verbal assessments were positively correlated. Average word reading time was reliably correlated with ACT reading scores. Word reading time in the self-paced reading task and participants’ reports of time spent reading were reliably correlated (p = .05) in that those who reported spending more time reading had longer reading times than those who reported spending less time reading, although the magnitude of this correlation was small (r = .20). One possible interpretation of the direction of this effect is that slower readers spend more time each week completing their reading assignments and other reading material, and hence report reading for longer periods each week.

Table 3.

Overall Correlations Between Measures of Reading Skill and Print Exposure

Measure   Word Reading Time   Question Accuracy   ACT English   ACT Reading   ART   MRT   Reading Time   Writing Time
Word reading time
Question accuracy .16
ACT English −.18 .40**
ACT Reading −.31** .35** .60**
ART −.05 .16 .30** .29**
MRT −.14 .08 .28** .14 .64**
Time spent reading .20* .00 −.03 .08 .16 .22**
Time spent writing .13 .00 −.03 .04 .22* .17† .63**
CRH −.08 .19 .31** .37** .44** .41** .34** .28**
*Correlation is significant at the .05 level (two-tailed). **Correlation is significant at the .01 level (two-tailed). †Correlation approaches significance (.05 < p < .11).

Many of the print exposure measures (ART, MRT, time spent reading, time spent writing, and CRH) were reliably correlated with one another. The pattern of these correlations revealed that, although the CRH survey correlated significantly with all of the other measures of print exposure (range of r = .27–.44), the largest correlations were between print exposure measures of similar types—namely, the ART with the MRT, and the time spent reading estimates with the time spent writing estimates.

One of the primary goals of this study was to extend previous results linking print exposure and single word processing by examining the relationship between print exposure and sentence reading abilities. Table 3 reveals that the self-paced reading measures were consistently related to ACT scores (N = 78 for correlations involving the ACT, rather than the 99 available for other measures), but they did not reliably correlate with measures of print exposure. Correlations between print exposure measures and the ACT scores were stronger, indicating that it is possible to identify relationships between print exposure and reading achievement in college student samples. ACT English scores were reliably correlated with the ART, MRT, and the CRH survey, and the ACT reading scores were correlated with the ART and the CRH survey. Thus, the print exposure and reading tasks that we developed and administered directly to participants did not reliably correlate with one another, although both our print exposure and sentence reading tasks reliably correlated with the ACT scores. Of course, it is impossible to interpret these null results with any certainty, but it is possible that the extensively normed ACT provides a broader, more robust measure of multiple aspects of reading comprehension, one that relates to the narrower tests of print exposure and sentence reading even when those narrower tests do not exhibit reliable correlations with each other.

Table 2 presents the correlations between the individual items of the self-report questionnaires, the print exposure measures, and the reading skill measures. Clearly, many correlations are presented in this table, and instead of describing each one, we focus on some general patterns. The first notable pattern that emerges is that items within the CRH survey consistently correlate with the majority of the objective print exposure and reading measures, whereas the time estimate measures do not. The second general pattern is that, across individual items within the time estimates, there are dissociations in the correlations between academic and nonacademic reading times. Whereas academic reading and textbook reading are positively correlated with each other, textbook and fiction reading are negatively correlated. Beyond this specific negative correlation, other types of reading materials that one might argue are mostly nonacademic (e.g., magazines, newspapers, e-mail) are positively correlated with each other.

Factor Analysis

The pattern of correlations discussed above suggests that measures of print exposure relate to computer-based sentence reading and standardized measures of reading achievement in complex ways in this sample. In order to further explore these relationships and assess which measures tend to group together, a factor analysis was performed. Table 4 provides the factor loadings of a principal components analysis after varimax rotation for the measures used in the present study. Three factors were extracted using both the scree test (Cattell, 1966) and Kaiser’s rule of eigenvalues greater than 1. Together, the three extracted factors accounted for 72.8% of the variance in the measures of participants’ reading performance and print exposure. Similar factor structures were obtained when an oblique (oblimin) rotation was used and when the two self-reported time estimates were included in the factor analysis; neither of these alternate analyses is included in Table 4.
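For readers who wish to reproduce this type of analysis, the sketch below performs a principal components analysis on the correlation matrix, retains components by Kaiser’s rule, and applies a varimax rotation using only NumPy. It is a generic implementation of the analysis type reported in Table 4, not the authors’ original code, and the `data` array is assumed to hold the seven measures for each participant.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loadings matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < variance * (1 + tol):   # stop when the criterion no longer improves
            break
        variance = s.sum()
    return loadings @ rotation

def pca_varimax(data):
    """Principal components of the correlation matrix, Kaiser's rule, varimax rotation."""
    corr = np.corrcoef(data, rowvar=False)          # data: participants x measures
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]               # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                            # Kaiser's rule
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return varimax(loadings), eigvals
```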

Table 4.

Principal Components Factor Analysis After Varimax Rotation

Variable   Factor 1   Factor 2   Factor 3
Word reading time −0.19 −0.08 0.91
Question accuracy 0.68 0.05 0.53
ACT English 0.80 0.23 −0.10
ACT Reading 0.84 0.13 −0.26
ART 0.16 0.84 −0.05
MRT 0.01 0.85 −0.12
CRH 0.37 0.60 0.10
Initial eigenvalues 2.68 1.30 1.12
Rotation sums of squared loadings 2.01 1.89 1.22
Cumulative % variance 38.3 56.8 72.8

Note—N = 78. Significant factor loadings are indicated in bold.

Although this factor analysis was exploratory in nature, it does tend to confirm the general patterns identified through the correlational analyses described above. First, it is clear that some of the measures clustered together according to the means by which they were collected. Both ACT measures clustered under the first factor, the measures of print exposure (ART, MRT, and CRH) clustered together under the second factor, and, although comprehension accuracy loaded most strongly on the first factor, the computer-based measures of sentence comprehension also clustered together fairly well under the third factor. Thus, we began this investigation with three types of measures (achievement tests, computerized measures of sentence reading, and measures of print exposure), and the factor analysis largely reproduced this taxonomy. In sum, the factor analysis indicates that there are a number of dimensions along which reading performance and habits can be measured, all of which seem to capture slightly different aspects of this multifaceted skill.

Regression Analyses

We next explored the role of print exposure through a series of hierarchical regression analyses examining the extent to which various factors together predict general reading performance in college students. We chose a composite of the two standardized measures (the average of the ACT English and Reading scores) as the measure likely to provide the most stable index of participants’ achievement and abilities. A concern with this type of analysis for the present data is that some potential predictors are themselves intercorrelated (as is shown in Table 3), making it difficult to interpret the results of a multiple regression. We sought to minimize these concerns by conducting a series of hierarchical regressions in which the order and measures entered into the regression model were varied. In addition, we created a composite objective measure of print exposure from the highly correlated ART and MRT scores; the composite was simply the sum of the two scores.

Each analysis was designed to answer a slightly different question. Ultimately, there were four potential predictors: sentence comprehension accuracy, word reading time, the CRH survey, and the ART/MRT composite.

Table 5 presents three hierarchical regressions. The first two regressions were designed to address how well reading time and print exposure predict ACT scores. Sentence comprehension accuracy was not included in these analyses because it loaded on the same factor as the ACT measures in the factor analysis, thus potentially serving as a suppressor of the other measures included in the regression analysis. The only difference between these first two regression models is the order in which the ART/MRT composite and CRH survey were entered, which was done to avoid the problem of suppressing relationships that might be present given the correlations between the print exposure measures. The first model reveals that the combination of word reading time, CRH, and the ART/MRT composite accounts for 23% of the total variance in the ACT composite scores [F(3,74) = 7.24, p < .001]. Although both the word reading time [β = −.238; t(1,74) = −2.30, p < .05] and CRH survey [β = .305; t(1,74) = 2.71, p < .01] measures added unique variance to the model and remained significant predictors after the other variables were partialed out, the ART/MRT composite did not contribute significantly beyond the other two measures and was not a unique individual predictor of the ACT composite [β = .141; t(1,74) = 1.23, p > .05]. Unlike the first regression model, the second shows that when the ART/MRT composite is entered before the CRH survey, it does contribute to a significant increase in the overall model fit [R2 change = .070; F(1,75) = 6.214, p < .05]. In addition, the ART/MRT composite was a unique predictor of the ACT composite before the CRH survey was entered (β = .270, t = 2.49, p < .05). As before, however, after the CRH survey was entered the unique predictability of the ART/MRT composite was reduced to nonsignificance. These regressions demonstrate that measures of print exposure and reading speed, when combined, account for a significant amount of variance in an individual’s performance on verbal achievement tests. The fact that the ART/MRT composite is no longer a significant individual predictor when the CRH survey is entered into the model suggests that the CRH survey not only accounts for similar variance in the standardized measures, but also contributes additional variance, as indicated by the significance of its partial regression coefficient.
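The logic of these hierarchical regressions can be sketched as follows: predictors are entered in a fixed order, and R2 and the change in R2 are recorded at each step. The sketch uses ordinary least squares via NumPy and assumes arrays holding the predictors and the ACT composite; it is illustrative rather than the analysis code used here.

```python
import numpy as np

def r_squared(predictor_list, y):
    """R-squared of an OLS model with an intercept and the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictor_list))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def hierarchical_r2(predictors, y):
    """predictors: list of (name, 1-D array), entered in order; returns per-step R2 and R2 change."""
    steps, entered, prev_r2 = [], [], 0.0
    for name, x in predictors:
        entered.append(x)
        r2 = r_squared(entered, y)
        steps.append((name, r2, r2 - prev_r2))
        prev_r2 = r2
    return steps

# e.g. hierarchical_r2([("word RT", rt), ("CRH", crh), ("ART/MRT", artmrt)], act_composite)
```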

Table 5.

Hierarchical Regressions of Computerized Sentence Reading and Print Exposure Measures on the ACT English and Reading Composite

Model Step Measure R2 R2 Change Final β
1 1 Word reading time .08 .08 −.24
2 CRH .21 .13 .31
3 ART/MRT composite .23 .02 .14
2 1 Word reading time .08 .08 −.24
2 ART/MRT composite .15 .07 .14
3 CRH .23 .08 .31
3 1 Word reading time .08 .08 −.33
2 ART/MRT composite .15 .07 .12
3 CRH .23 .08 .22
4 Sentence comprehension .38 .15 .41

Note—N = 78. “Step” indicates order of entry into hierarchical regression. Boldface indicates significance at p < .05.

The third and final hierarchical regression model addressed whether sentence comprehension accuracy contributes significant additional variance beyond the print exposure and reading speed measures. By entering this variable last in the regression, we also address whether the CRH and reading speed measures continue to be significant individual predictors of the ACT composite scores when a variable known to load on the same factor as this measure is included. The regression revealed that the four variables account for 38% of the variance in the ACT composite [F(4,77) = 11.19, p < .001] and that the addition of sentence comprehension accuracy added a significant amount of variance beyond the three variables previously included [R2 change = .153; F(1,73) = 18.03, p < .001]. Finally, both the CRH and reading speed measures remained significant individual predictors of the ACT composite even after common variance from the sentence comprehension measure was partialed out. The regression analyses thus serve to further bolster claims that reading speed, print exposure, and sentence comprehension accuracy all reflect distinct aspects of reading skill that can be measured independently of one another, and that all contribute to performance on verbal achievement tests.

DISCUSSION

In the present study, we set out to address several related questions. The first was whether a relationship between print exposure and various aspects of reading skill exists in college students, a highly literate population that engages in extensive reading. Second, we investigated whether modifications to previous measures of print exposure could avoid some of the difficulties associated with measurement of this construct in the past. Finally, we investigated whether a relationship between print exposure and reading could be extended to measures of sentence comprehension in college students, a middle ground between the well-demonstrated relationships between print exposure and lexical processing on the one hand and high-level text comprehension processes on the other. Our results bear on all three questions.

First, the data show that even among college students with generally above-average verbal ACT scores, and who as a group presumably read more than much of the general adult population, there is still a clear relationship between print exposure and reading-related achievement, as assessed by the verbal portions of the ACT. This result is consistent with previous research relating measures of print exposure to the verbal portion of the SAT (Stanovich, West, & Harrison, 1995). Importantly, these relationships were found not only with updated objective measures of print exposure (the ART and MRT), but also with one section of a newly created self-report measure, the CRH survey. The success of the CRH addresses our second question, concerning whether improved measures of print exposure can be developed. These data suggest that at least some comparative, Likert-scaled self-reports of reading habits may avoid problems previously associated with self-reports of print exposure. Moreover, they may be better equipped to capture a broader range of reading experiences (including electronic texts—e.g., e-mail, blogs, Web sites) than objective measures that are currently available, such as the ART and the MRT. There are several potential concerns with the CRH measure, however. First, it is not clear to what extent the CRH is subject to the criticism of socially desirable responding, such that respondents claim to be better than peers, in the same way that time estimates of reading have been criticized for allowing respondents to inflate their accomplishments. Although the present measures cannot definitively rule out this possibility, the CRH data do not suggest that the measure is subject to substantial amounts of inflation by respondents. That is, respondents overall do claim that they are slightly above average on the CRH measures in comparison to peers, but they also score slightly above the national average on the ACT verbal tests. Thus, given the inherent limitations of self-report data, the CRH appears to be a useful addition to other assessments of print exposure. Second, it is currently unknown whether comparative measures would be effective with other samples of readers. College students, given their frequent close contact with peers, may be better able to judge their comparative reading skill and efforts than would groups of people who are not attending college. Thus, although comparative assessments proved useful here, it is unclear whether they would provide effective assessments of print exposure in other groups, such as children or more heterogeneous groups of adults. Within the college student sample, however, the results of the present study seem likely to replicate in that the subcomponents of the CRH exhibit reliability (Cronbach’s α = .723 for five items).
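For reference, the internal consistency figure reported above can be computed with a short function like the one below; the input is a participants × items matrix of the five CRH responses, and the code is a generic sketch rather than the authors’ analysis script.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (participants x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# crh = np.array(...)   # 99 x 5 matrix of CRH Likert responses
# print(cronbach_alpha(crh))
```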

Broadly speaking, our data show that not all self-report measures of print exposure are equally effective. Findings from the self-reported estimates of time spent reading and writing tend to validate Stanovich and West’s (1989) claims that time estimates are an unreliable measure of print exposure. These measures yielded few reliable correlations with reading performance and the objective measures of print exposure. Moreover, we found that people who reported more time spent reading actually were slower readers in the self-paced reading task than were those who reported less time spent reading. Although this result is in need of validation, it does point out an inherent problem with relating various reading abilities to reports of how much time people spend reading in the course of a week. Most work in print exposure has assumed that longer time spent reading results in greater print exposure, but it is logically possible that readers who accurately report large amounts of time spent reading form two distinct groups: avid readers who do read more text than peers and consequently do have greater print exposure, and slow readers who devote considerable time to reading without accumulating correspondingly high levels of print exposure. Yet another possibility is that different reading groups may have different degrees of distortion in their time estimates—for example, frequent readers may be more accurate in their estimates than sporadic readers. It is possible that combining time estimates with estimates of the number of pages individuals read would provide more stable assessments of print exposure, but page estimates are also likely to be an extremely noisy measure of the amount and complexity of the text read. Future studies should thus evaluate the extent to which these different means of assessing print exposure can predict both specific and more general reading abilities.

Our third general question was whether print exposure measures could be related to assessments of sentence-level reading processes in college students. These relationships appeared tenuous at best. Although the useful measures of print exposure that we identified (CRH, ART, and MRT) showed a clear relationship to students’ verbal ACT scores, the relationships between these measures and the computerized assessments of reading speed and sentence comprehension accuracy were not reliable. Since ACT scores and performance on the computer-based task were themselves correlated, one likely cause of the weak relationship between print exposure and the computer-based measures is the narrowness of, or noise in, the self-paced reading measures. Self-paced reading tasks are a common assessment of comprehension difficulty in studies that compare sentence types differing in complexity or ambiguity (Mitchell, 1994), and the measures appear to be robust enough for this purpose. Nevertheless, there are other hints in the literature that measures of reading based on isolated sentences do not always reflect differences in comprehension skill. For example, verbal working memory measures or other assessments of individual differences frequently are found to correlate only mildly with measures of sentence comprehension speed and accuracy (Waters & Caplan, 1996), even though the same working memory measures correlate well with broader assessments of verbal ability such as performance on the verbal portion of the SAT (Daneman & Carpenter, 1980). Standardized tests such as the ACT and SAT may have advantages over some laboratory-based measures as assessments of broad reading and verbal skill, because the tests themselves have breadth and are heavily normed, and because participants are highly motivated to perform to the best of their ability. Yet another explanation for the difference between the standardized and laboratory measures in their correlations with print exposure is the existence of a mediating variable associated with the ACT and print exposure measures but not with self-paced reading. For instance, the acts of reading in the ACT and reading the material assessed in print exposure questionnaires are much more similar to each other than to the acts of reading individual words and reading sentences in self-paced reading paradigms. Thus, familiarity with the type of reading done in the ACT or the ability to integrate contextual information across sentences might serve to mediate the relationship between the ACT and the measures of print exposure. Such familiarity with the task demands and contextual integration would presumably not be present in self-paced reading.

In sum, this research has demonstrated a clear relationship between print exposure measures and performance on standardized tests of reading and verbal ability in college students. Given the restricted range of abilities and reading habits in college students relative to the population at large, the identification of a clear role for print exposure reaffirms the importance of this variable even at the upper end of the reading and performance distribution. This restricted range may have limited our ability to observe differences in sentence reading processes as a function of print exposure in this population, but the relationship between print exposure and ACT scores leaves open the possibility that relationships between print exposure and specific subcomponents of the reading process could be identified in college students with more robust reading measures. Moreover, this work has identified which types of assessments of print exposure appear to be most useful for this sample, and has developed and updated print exposure assessments that should prove useful in other investigations of print exposure and reading.

Acknowledgments

This research was supported by NIMH Grant P50 MH644445, NICHD Grant R01 HD047425, and the Wisconsin Alumni Research Fund. Requests for the original tasks discussed in this article and other correspondence can be addressed to M. C. MacDonald, Department of Psychology, University of Wisconsin, Madison, WI 53706 (mcmacdonald@wisc.edu).

REFERENCES

  1. Allen L, Cipielewski J, Stanovich KE. Multiple indicators of children’s reading habits and attitudes: Construct validity and cognitive correlates. Journal of Educational Psychology. 1992;84:489–503.
  2. Anderson RC, Wilson PT, Fielding LG. Growth in reading and how children spend their time outside of school. Reading Research Quarterly. 1988;23:285–303.
  3. Ang CH, Noble JP. Incremental validity of ACT assessment scores and high school course information for freshman course placement. ACT Research Report Series. 1993:93–95. Available at www.act.org/research/reports/index.html.
  4. Bandura A. Self-efficacy: The exercise of control. New York: Freeman; 1997.
  5. Beech JR. Individual differences in mature readers in reading, spelling, and grapheme-phoneme conversion. Current Psychology: Developmental, Learning, Personality, Social. 2002;21:121–132.
  6. Biber D. Spoken and written textual dimensions in English: Resolving the contradictory findings. Language. 1986;62:384–414.
  7. Cattell RB. The scree test for the number of factors. Multivariate Behavioral Research. 1966;1:245–276. doi: 10.1207/s15327906mbr0102_10.
  8. Chateau D, Jared D. Exposure to print and word recognition processes. Memory & Cognition. 2000;28:143–153. doi: 10.3758/bf03211582.
  9. Cipielewski J, Stanovich KE. Predicting growth in reading ability from children’s exposure to print. Journal of Experimental Child Psychology. 1992;54:74–89.
  10. Cohen S. Effects of task, interval and order of presentation on time estimation. Perceptual & Motor Skills. 1971;33:101–102. doi: 10.2466/pms.1971.33.1.101.
  11. Cunningham AE, Stanovich KE. Assessing print exposure and orthographic processing skill in children: A quick measure of reading experience. Journal of Educational Psychology. 1990;82:733–740.
  12. Cunningham AE, Stanovich KE. Assessing print exposure and orthographic processing in children: Associations with vocabulary, general knowledge and spelling. Journal of Educational Psychology. 1991;83:423–441.
  13. Daneman M, Carpenter PA. Individual differences in working memory and reading. Journal of Verbal Learning & Verbal Behavior. 1980;19:450–466.
  14. Ennis PH. Adult book reading in the United States (National Opinion Research Center Report No. 105). Chicago: University of Chicago Press; 1965.
  15. Frijters JC, Barron RW, Brunello M. Direct and mediated influences of home literacy and literacy interest on prereaders’ oral vocabulary and early written language skill. Journal of Educational Psychology. 2000;92:466–477.
  16. Greaney V. Factors related to amount and type of leisure time reading. Reading Research Quarterly. 1980;15:337–357.
  17. Guay M. Long-term retention of temporal information. Perceptual & Motor Skills. 1982;54:843–849. doi: 10.2466/pms.1982.54.3.843.
  18. Guthrie JT. Reading in New Zealand: Achievement and volume. Reading Research Quarterly. 1981;17:6–27.
  19. Just MA, Carpenter PA. A capacity theory of comprehension: Individual differences in working memory. Psychological Review. 1992;99:122–149. doi: 10.1037/0033-295x.99.1.122.
  20. Just MA, Carpenter PA, Woolley JD. Paradigms and processes in reading comprehension. Journal of Experimental Psychology. 1982;111:228–238. doi: 10.1037//0096-3445.111.2.228.
  21. Kennedy A, Murray WS. Inspection times for words in syntactically ambiguous sentences under three presentation conditions. Journal of Experimental Psychology. 1984;10:833–849.
  22. King J, Just MA. Individual differences in syntactic processing: The role of working memory. Journal of Memory & Language. 1991;30:580–602.
  23. Lewis R, Teale W. Another look at secondary school students’ attitudes toward reading. Journal of Reading Behavior. 1980;12:189–201.
  24. MacDonald MC, Christiansen MH. Reassessing working memory: A comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review. 2002;109:35–54. doi: 10.1037/0033-295x.109.1.35.
  25. McBride-Chang C, Manis FR, Seidenberg MS, Custodio RG, Doi LM. Print exposure as a predictor of word reading and reading comprehension in disabled and nondisabled readers. Journal of Educational Psychology. 1993;85:230–238.
  26. Mitchell D. Sentence parsing. In: Gernsbacher MA, editor. Handbook of psycholinguistics. New York: Academic Press; 1994. pp. 375–409.
  27. Noble J. Estimating reading skill from ACT assessment scores (ACT Research Report 88). 1988. Available at www.act.org/research/reports/index.html.
  28. Noble J. Predicting college grades from ACT assessment scores and high school coursework and grade information (ACT Research Report 91-3). 1991. Available at www.act.org/research/reports/index.html.
  29. Pearlmutter NJ, MacDonald MC. Individual differences and probabilistic constraints in syntactic ambiguity resolution. Journal of Memory & Language. 1995;34:521–542.
  30. Scribner S, Cole M. The psychology of literacy. Cambridge, MA: Harvard University Press; 1981.
  31. Sharon AT. What do adults read? Reading Research Quarterly. 1973–1974;9:148–169.
  32. Simon HA, Newell A. Thinking processes. In: Krantz DH, Atkinson RC, editors. Contemporary developments in mathematical psychology: I. Learning, memory and thinking. Oxford: Freeman; 1974. pp. 101–144.
  33. Stanovich KE, Cunningham AE. Studying the consequences of literacy within a literate society: The cognitive correlates of print exposure. Memory & Cognition. 1992;20:51–68. doi: 10.3758/bf03208254.
  34. Stanovich KE, West RF. Exposure to print and orthographic processing. Reading Research Quarterly. 1989;24:402–433.
  35. Stanovich KE, West RF, Harrison MR. Knowledge growth and maintenance across the life span: The role of print exposure. Developmental Psychology. 1995;31:811–826.
  36. Stiggins RJ, Schmeiser CB, Ferguson RL. Validity of the ACT assessment as an indicator of reading ability. Applied Psychological Measurement. 1978;2:337–344.
  37. Stringer R, Stanovich KE. The connection between reaction time and variation in reading ability: Unraveling covariance relationships with cognitive ability and phonological sensitivity. Scientific Studies of Reading. 2000;4:41–53.
  38. Waters GS, Caplan D. The measurement of verbal working memory capacity and its relation to reading comprehension. Quarterly Journal of Experimental Psychology. 1996;49A:51–79. doi: 10.1080/713755607.
  39. West RF, Stanovich KE, Mitchell HR. Reading in the real world and its correlates. Reading Research Quarterly. 1993;28:35–50.
  40. Zill N, Winglee M. Who reads literature? The future of the United States as a nation of readers. Cabin John, MD: Seven Locks Press; 1990.
