Published in final edited form as: J Mem Lang. 2009 Apr 1;60(3):368–392. doi: 10.1016/j.jml.2008.09.005

Making simple sentences hard: Verb bias effects in simple direct object sentences

Michael P Wilson 1,*, Susan M Garnsey 1

Abstract

Constraint-based lexical models of language processing assume that readers resolve temporary ambiguities by relying on a variety of cues, including particular knowledge of how verbs combine with nouns. Previous experiments have demonstrated verb bias effects only in structurally complex sentences, and have been criticized on the grounds that such effects could be due to a rapid reanalysis stage in a two-stage modular processing system. In a self-paced reading experiment and an eyetracking experiment, we demonstrate verb bias effects in sentences with simple structures that should require no reanalysis, and thus provide evidence that the combinatorial properties of individual words influence the earliest stages of sentence comprehension.

Keywords: language, sentence comprehension, parsing, verb bias, garden-path sentences, eyetracking, self-paced reading


The apparent everyday ease of language comprehension conceals considerable complexity in the processes that are necessary to its success. Certain kinds of sentences, however, reveal the underlying complexity by leading to characteristic errors of interpretation called “garden paths.” “Garden-pathing” occurs when a sentence is misinterpreted at first and then must be reinterpreted. Since they require reanalysis, garden-path sentences generally take longer to understand than other sentences. One goal of language comprehension research is to provide explanations for why and how certain kinds of sentences lead to garden-pathing, and thus provide a fuller account of the human capacity for language.

Sentence (1a) below is an example of a sentence that is likely to garden-path most readers.

  • 1a. The professor read the newspaper had been destroyed.

  • 1b. The professor read the newspaper during his break.

The phrase the newspaper in (1a) is likely to be interpreted initially as the direct object of read, but the rest of the sentence makes it clear that that interpretation is incorrect. Instead, newspaper in (1a) turns out to be the subject of an embedded clause. Thus, the role of the phrase the newspaper is temporarily ambiguous in sentences like (1), since it could turn out to be either the direct object of read as in (1b), or the subject of an embedded clause as in (1a). Most readers would probably adopt the direct object interpretation of the newspaper at first in these examples, and thus be surprised when the word had disambiguates the sentence in the unexpected direction in (1a). Potential mistakes could easily be avoided, of course, if people waited until they reached the word had in (1a) or during in (1b) before deciding how newspaper should be integrated into the sentence. However, considerable evidence indicates that people generally do not wait. Instead, as soon as they recognize each word, they try to integrate it into a continually evolving interpretation of the sentence. Since there are many kinds of sentences containing temporary ambiguities, people are often faced with uncertainty about how to integrate incoming words into the sentence structure. In (1a), a decision must be made about whether or not the newspaper is the thing being read. How and when people make such decisions, and what kinds of information influence those decisions, has been the focus of intense debate in psycholinguistics.

Broadly speaking, two classes of comprehension theories have been developed. Modular two-stage theories (e.g., Frazier, 1990) argue that comprehension is made manageable by first trying to resolve ambiguities in the simplest possible way. According to the Minimal Attachment Principle of the Garden Path Model (Frazier & Fodor, 1978), for example, nouns following verbs are first interpreted as direct objects because that is the simplest alternative. If wrong, this interpretation is then revised at a second reanalysis stage. Simpler sentence structures are thought to be easier and thus faster for comprehenders to construct, so a system that always pursues those first could be especially efficient. Of course, the relative efficiency of a system that works in this way depends on how often the initial choices are wrong, as well as the cost of revising them when they are.

In the second type of comprehension model, multiple factors continuously contribute to and constrain decisions about ambiguities throughout comprehension. Such models have been most fully developed in the constraint satisfaction framework (e.g., MacDonald, Pearlmutter, & Seidenberg, 1994; Trueswell & Tanenhaus, 1994). Such interactive models generally incorporate some limited capacity to pursue multiple interpretations in parallel, each weighted according to the evidence in its favor. The likelihood that a verb will be followed by a particular type of sentence structure constitutes one important source of evidence in most such models, leading to the use of the label “constraint-based lexical models” (Trueswell & Tanenhaus, 1994). Knowledge tied to verbs is particularly influential because verbs tend to place strong constraints on how the other words in a sentence can combine. For example, the verb read is one that can be followed by simple direct objects, or more complex embedded clauses, or various kinds of prepositional phrases, but it is most likely to be followed by a direct object (i.e., it has a “Direct-object bias”). Thus, comprehenders might find a sentence like example (1a), in which read is followed by an embedded clause, particularly difficult because it violates their verb-based expectation for a direct object. Thus, for sentences like example (1a), both kinds of models make the same predictions, but for partially different reasons.

Proponents of both kinds of models find supporting evidence in the literature, in part because it has proved to be quite difficult to clearly distinguish the initial decisions of the parser from subsequent reanalysis stages when initial decisions are incorrect. Evidence for the initial effects of multiple types of information, which some theorists have taken as support for an interactive system, has been interpreted by modular two-stage theorists as reflecting reanalysis. This impasse has arisen in part because most research so far has been restricted to testing the comprehension of temporarily ambiguous sentences that turn out to have the more complex of their possible structures. Various relevant factors have been found to reliably ease the interpretation of such sentences, but it remains possible that such effects are due to reanalysis. Frazier (1995) has argued that it would be more convincing to show that the comprehension of simple structures is also affected by the same kinds of cues. Since sentences with simple structures should never require reanalysis according to the Garden Path Model, their comprehension should not be affected by the kinds of cues that have been shown to affect more complex structures. As Frazier (1995) notes:

“It may be significant that garden paths have never been convincingly demonstrated in the processing of analysis A (the structurally simplest one)... In such cases, if analysis A ultimately proves to be correct, perceivers should show evidence of having been garden-pathed by a syntactically more complex analysis even though the syntactically simpler analysis is correct.” (p. 441)

The studies reported here follow the strategy suggested by Frazier, and examine the effect of one particular kind of cue on the comprehension of one particular kind of temporarily ambiguous sentence. The sentences contain a temporary ambiguity about whether a noun phrase immediately following a verb is its direct object or the subject of an embedded clause, as illustrated earlier in example (1). The relevant cue that is manipulated is the relative likelihood of the main verb being used in sentences with different kinds of structures (called “verb bias” here). Our studies include sentences that are resolved in the simplest possible way, as well as ones resolved in the more complex way.

The goal of the studies reported here is to determine whether verb bias influences the earliest stages of sentence processing by investigating its effects in temporarily ambiguous sentences that turn out to have the simpler of their possible structures. Since such sentences should never require reanalysis, evidence that verb bias affects their comprehension would provide strong evidence that the earliest stages of sentence processing are influenced by information tied to specific words, as predicted by constraint-based lexical models.

Method

Participants

Sixty-three native English speakers (23 males, 55 right-handed, mean age 21) participated for either partial course credit or a minimal sum. Due to experimenter error, only 54 participants’ data were available for analysis.

Materials

Verb bias was assessed in a norming study in which a group of 108 participants wrote sentence continuations for proper name + verb fragments. The norming study is described in more detail in Garnsey et al. (1997). For each verb, the continuations were hand-coded as 1) direct objects, 2) that-clauses, or 3) other types of sentence continuations, and the number of each type was tallied. Verbs were classified as Direct-object bias if they were used at least twice as often with a direct object as with an embedded clause in the norming study, and as Clause-bias if they were used at least twice as often with an embedded clause as with a direct object. Twenty verbs fitting these criteria were chosen for each verb type, with most of the verbs being used far more frequently with their preferred continuation. (See Table 1; biases for individual verbs are given in Appendix 2.) The two sets of verbs were matched in length and approximately matched in frequency, on average.

Table 1.

Verb Properties

| Verb type | Mean DO-bias strength¹ | Mean Clause-bias strength¹ | Mean frequency of verb² | Mean length of verb | Mean frequency (mean summed length) of 2-word disambig. region for clause continuations | Mean frequency² (mean summed length) of 2-word disambig. region for direct object continuations | Mean frequency (mean summed length) of first disambig. word for clause continuations | Mean frequency (mean summed length) of first disambig. word for direct object continuations |
|---|---|---|---|---|---|---|---|---|
| Direct-object bias verbs | 76% | 13% | 190 | 7.9 | 7117 (9.0) | 10628 (9.8) | 1287 (5.7) | 968 (5.5) |
| Clause-bias verbs | 11% | 58% | 108 | 7.6 | 6834 (9.5) | 10758 (9.3) | 852 (5.8) | 834 (5.8) |

¹ Bias was calculated as the percentage of sentences with a particular structure, out of all sentences containing the verb.
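To illustrate the classification criteria and the bias percentages in Table 1, a minimal sketch is given below (in Python; this is not the authors’ materials or code, and the continuation counts are invented placeholders). It tallies a verb’s norming continuations, computes bias as a percentage of all continuations for that verb, and applies the 2:1 criterion described above.

```python
# Minimal sketch of the verb-bias classification described above.
# Counts are hypothetical; real counts came from hand-coded norming continuations.

def classify_verb(n_direct_object, n_that_clause, n_other):
    """Return (bias label, %DO continuations, %clause continuations) for one verb."""
    total = n_direct_object + n_that_clause + n_other
    do_pct = 100.0 * n_direct_object / total      # cf. the DO-bias strength column of Table 1
    clause_pct = 100.0 * n_that_clause / total    # cf. the Clause-bias strength column of Table 1
    if n_direct_object >= 2 * n_that_clause:
        label = "Direct-object bias"
    elif n_that_clause >= 2 * n_direct_object:
        label = "Clause bias"
    else:
        label = "neither (not used in these materials)"
    return label, do_pct, clause_pct

# Hypothetical continuation counts for one verb:
print(classify_verb(n_direct_object=80, n_that_clause=12, n_other=16))
```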

For each verb, three sentence versions were constructed containing the same subject noun and post-verbal noun. One version had a direct-object continuation (as shown in 2b and 3b below), and the other two both had embedded clause continuations (as shown in 2a and 3a below). One of the embedded clause versions was temporarily ambiguous, while the other was made unambiguous by including the complementizer that. Thirty plausible noun + verb + noun combinations were taken directly from items used by Garnsey et al. (1997), but new sentence continuations were written for them to allow better matching of the disambiguating regions across direct object and embedded clause continuations. Another 48 sentences were newly created. For each of 38 verbs, two item sets with different subject and post-verbal nouns were created, and for two additional verbs a single item set was created, thus yielding 78 experimental item sets overall. (There was only one item set each for two of the verbs because counterbalancing the design required the total number of item sets to be a multiple of three.) In all items, the temporary ambiguity about whether the second noun phrase in the sentence was a direct object of the main verb or the subject of an embedded clause was always immediately disambiguated by either a verb in the embedded clause versions (e.g., might in 2a and could in 3a) or a conjunction or preposition in the direct object versions (e.g., because in 2b and when in 3b). Since it is impossible to create a grammatical English sentence where the complementizer that is followed by a direct object, the design could not be completely crossed. Half of the items contained Direct-object bias verbs as in (3) and half contained Clause-bias verbs as in (2). All experimental sentences contained 11 words in their ambiguous versions and 12 words in their unambiguous versions.

Clause-bias verb

  • 2a. The ticket agent admitted (that) the mistake might not have been caught.

  • 2b. The ticket agent admitted the mistake because she had been caught.

Direct-Object bias verb

  • 3a. The CIA director confirmed (that) the rumor could mean a security leak.

  • 3b. The CIA director confirmed the rumor when he testified before Congress.

The temporarily ambiguous nouns were all plausible as direct objects of the verbs they followed (with the exception of two items for which a less plausible direct object was used in error in Experiment 1; both of these items had Direct-object bias verbs and were dropped from all analyses). Plausibility was assessed in two kinds of norming studies. In one, people rated the plausibility of sentences like “The ticket agent admitted the mistake.” on a 7-point scale (7 = highly plausible), where a period following the post-verbal noun made it clear that it was a direct object. Mean plausibility ratings were 5.8 for items with Clause-bias verbs and 6.6 for items with Direct-object bias verbs. In the second kind of norming study, people rated fragments like “The ticket agent admitted that the mistake …” as the beginnings of sentences, to evaluate the plausibility of the embedded clause interpretations of the items. Mean plausibility when such fragments contained Clause-bias verbs was 6.4, and 6.0 when they contained Direct-object bias verbs.

Notice that there was a trade-off in the two kinds of plausibility ratings, such that plausibility was rated slightly higher whenever the sentence structure matched the verb’s bias. It should not be surprising that people rate sentences as more plausible when the verb is used in the way it most often is, and a similar pattern was observed by Garnsey et al. (1997) for a partially overlapping set of items. The ratings show that the materials met the goal of using nouns that were plausible both as direct objects and as subjects of embedded clauses, but they highlight the difficulty of evaluating plausibility. It is not clear that matching plausibility ratings even more closely across verb types would have the desired effect of more closely equating the plausibility of sentences with the two kinds of verbs. That is, higher ratings for direct objects following Clause-bias verbs might require using verb/noun pairings that are actually even more plausible than the ones that produce the same ratings for Direct-object bias verbs. We concluded that our materials were sufficiently matched in plausibility, given that Garnsey et al. (1997) found no early effects of far larger plausibility differences (3.6 on the same rating scale, compared to 0.8 here) on initial reading times when verbs were strongly biased, as they are here. However, to rule out the possibility that any observed reading time differences might be due to differences in the plausibility of the temporarily ambiguous noun as a direct object, rather than or in addition to verb bias, the results were analyzed for both the full set of items and for a subset that equated plausibility ratings across verb bias. In the set that equated rated plausibility, the 11 highest-rated items with Clause-bias verbs and the 11 lowest-rated items with Direct-object bias verbs were dropped, leaving 56 items over which the average rating was 6.4 for both verb types. The items that were dropped in this analysis are indicated in Appendix 1.

The two words following the post-verbal noun, which are underlined in (2) and (3), are called the “disambiguating region” here. It was not possible to closely match the frequency of the two words in the disambiguating region across the two types of sentence continuations while also satisfying other constraints on stimulus naturalness and plausibility. On average, the two words in the disambiguating region of direct object continuations were more frequent than those in the embedded clause continuations. However, this is not particularly problematic because our most important predictions concern the effect of the preceding main verb on reading times for the same type of continuation, and within continuation type, the frequencies of the disambiguating words were well-matched for the two types of verbs.

However, within continuation type there were small but potentially problematic length differences. The two-word disambiguation toward a clause continuation was 9.5 characters long on average after a Clause-bias verb, while it was 9.0 characters long on average after a Direct-object bias verb. The opposite pattern was true for direct object continuations, which were 9.8 characters long after Direct-object bias verbs and 9.3 characters long on average after Clause-bias verbs. Thus, in both cases the two-word disambiguating region tended to be a half character longer, on average, in continuations that matched verb bias. These differences arose because the length of the first but not the second word in the two-word disambiguating region was carefully controlled during stimulus construction. In spite of this, we decided to group the two words into a single analysis region to increase the stability of the results at this critical region, since it is well-known that short, high-frequency words (like most of the two words in the disambiguating region in these sentences) tend to be read very quickly and even skipped fairly often in free reading. Such words often fail to show reliable differences in reading time between conditions even when longer words immediately before and after do. Although a two-word disambiguating region was used, the analysis took a two-pronged approach to mitigate the possibility that length differences might obscure effects of verb bias at the disambiguation. First, as is customary, we corrected reading times for length differences, using procedures described below in the results section for Experiment 1. Second, to rule out the possibility that the length-correction procedure might distort results at the crucial disambiguating region, we also analyzed uncorrected reading times for just the first disambiguating word, since its length was matched better across conditions. If the pattern of uncorrected condition means for the first word is similar to that for the length-corrected two-word region, we can be assured that the pattern is not an artifact of the length-correction procedure.

The 78 experimental item sets were counterbalanced over three lists such that participants saw only one version from each item set and saw equal numbers of items in each condition. Each verb was repeated twice during a list, appearing once with a direct object continuation and once with a clause continuation, with different subject and post-verbal nouns each time, with the exceptions that two verbs only appeared once per list, as described above, and that one verb, asserted, was followed by the same noun, belief, in both of its items by mistake. In the latter case, the rest of the two asserted sentences were different. The experimental items were presented intermixed with 119 distractors and preceded by 10 practice items for a total of 207 trials. There were six types of distractors, including two reduced that-clauses, 23 conjoined clauses, 27 sentences with subject relative clauses, 24 sentences with infinitive clauses, 20 sentences with transitively biased verbs used in simple transitive sentences such as The politician ignored the scandal in his brief speech, and 23 sentences with intransitively biased verbs used in simple intransitive sentences such as The physical act of respiration begins with the diaphragm. The distractors varied in length from 7 to 14 words.
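The counterbalancing scheme described above is a standard three-list rotation; the sketch below (Python, not the authors’ scripts, with item content omitted) shows one way to rotate the three versions of each item set across three lists so that every list contains one version of every item and equal numbers of each version.

```python
# Minimal sketch of three-list Latin-square counterbalancing for 78 item sets,
# each with three versions. Not the authors' scripts.

VERSIONS = ["ambiguous clause", "unambiguous clause (that)", "direct object"]

def build_lists(n_items=78, n_lists=3):
    lists = {list_id: [] for list_id in range(n_lists)}
    for item in range(n_items):
        for list_id in range(n_lists):
            version = VERSIONS[(item + list_id) % len(VERSIONS)]
            lists[list_id].append((item, version))
    return lists

lists = build_lists()
# Each list contains all 78 items, 26 in each of the three versions.
for lst in lists.values():
    assert all(sum(1 for _, v in lst if v == ver) == 26 for ver in VERSIONS)
```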

Each sentence in the experiment was followed by a yes/no comprehension question. Since the meaning of the sentences differed between the direct object and clause continuations, the comprehension question sometimes differed across versions in order to fit the sentence. Items were pseudo-randomly ordered with the constraints that 1) the first two sentences in the list were distractor items; 2) there were no more than two experimental items in a row; and 3) there were no more than 4 trials in a row with the same answer to the post-sentence question. Trials containing the same verb were as widely spaced as possible, with a minimum of 25 trials between the two repetitions of the same verb. Each list was presented in the same order, and each participant saw only one list.

Procedure

Participants read 207 sentences in a word-by-word self-paced moving-window paradigm. Sentences were displayed on a computer screen using the MEL (Microcomputer Experimental Laboratory) software package. Each sentence was followed by a yes/no comprehension question that was answered by pushing a key on a computer keyboard.

Each trial began with dashes on the screen in place of all non-space characters in the sentence. After the first spacebar push, the trial number was displayed at the left side of the screen. With each successive spacebar push, the next word appeared and the previous word reverted to dashes. All sentences fit on exactly one line of the computer screen. Participants were instructed to use whichever hand they felt most comfortable using to push the spacebar and answer the questions. The experiment lasted about 35 minutes.
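For concreteness, the masking logic of the moving-window display can be sketched as follows (Python; the actual experiment was run in MEL, and real display and timing code is omitted): all non-space characters are replaced with dashes, and each spacebar press reveals the next word while re-masking the previous one.

```python
# Minimal sketch of word-by-word moving-window masking (display/timing code omitted).

def mask(word):
    return "-" * len(word)

def moving_window_frames(sentence):
    """Yield successive displays: all words masked, then one word visible at a time."""
    words = sentence.split()
    yield " ".join(mask(w) for w in words)                 # initial all-dashes display
    for i in range(len(words)):
        yield " ".join(w if j == i else mask(w) for j, w in enumerate(words))

for frame in moving_window_frames("The professor read the newspaper during his break."):
    print(frame)  # in the experiment, each new frame is triggered by a spacebar press
```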

Results

Two items were dropped from further analysis when errors in them were discovered (indicated in Appendix 1). Both of the dropped items contained Direct-object bias verbs.

Accuracy in answering the comprehension questions was generally high, with 91% correct performance overall. However, questions following sentences containing Direct-object bias verbs were answered slightly less accurately (89%) than those following sentences containing Clause-bias verbs (92%), reliable only by participants (F1(1,53) = 11.7, p < .01; F2(1,74) = 2.3, p >.1; MinF′(1,105) = 1.9, p>.1). This difference came almost entirely from sentences with direct object continuations, which were responded to 8% less accurately when they contained Direct-object bias verbs than when they contained Clause-bias verbs, exceeding the 95% confidence interval of 4% on the difference by both participants and items. Because only one sentence type showed such a difference, there was an interaction between verb bias and sentence structure in the omnibus ANOVA, but only by participants (F1(2,106) = 8.4, p < .01; F2(2,148) = 2.8, p > .1; MinF′(2,121) = 2.1, p > .1). There was no main effect of sentence structure on accuracy.

Trials on which there was an incorrect response to the comprehension question were omitted from analyses of the reading time data, resulting in an overall loss of 9% of the data. The primary analyses reported here are based on reading times corrected for length differences (Ferreira & Clifton, 1986). A regression equation characterizing the relationship between region length and reading time was calculated separately for each participant, based on the experimental items only. (Length correction was also calculated using all items including distractors, which produced the same pattern of results.) Expected reading times were calculated for each region length for each participant based on the regression equations, and then subtracted from the actual reading times to yield length-corrected residual reading times. The length-corrected data were then trimmed at three standard deviations from the overall mean across conditions for a particular participant in a particular region, such that values exceeding the cutoff were replaced with the value at three standard deviations. Approximately 1.4% of the data were replaced in this way. Analyses were then performed separately for each of the sentence regions indicated below in (4) with slashes separating the regions. The critical disambiguating region is underlined.

  • 4. The ticket agent/admitted/that/the mistake/might not/have been caught.
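The length-correction and trimming procedure described above can be sketched as follows (Python with numpy; this is not the original analysis code, and the reading times and lengths shown are invented). For each participant, reading time is regressed on region length in characters, residuals (observed minus predicted) are computed, and residuals beyond three standard deviations from the mean are replaced with the value at the cutoff.

```python
# Minimal sketch of length-corrected residual reading times and 3-SD trimming.
# Not the original analysis code; rts/lengths below are hypothetical.

import numpy as np

def residual_rts(rts, lengths):
    """Length-corrected residuals for ONE participant: observed minus predicted RT."""
    slope, intercept = np.polyfit(lengths, rts, deg=1)
    return np.asarray(rts) - (slope * np.asarray(lengths) + intercept)

def trim_3sd(values):
    """Replace values beyond 3 SD of the mean with the value at the 3-SD cutoff."""
    values = np.asarray(values, dtype=float)
    mean, sd = values.mean(), values.std(ddof=1)
    return np.clip(values, mean - 3 * sd, mean + 3 * sd)

rts = [352, 410, 1290, 377, 421, 368, 399, 405]        # raw reading times (ms)
lengths = [9, 11, 10, 9, 12, 10, 11, 10]               # region lengths (characters)
corrected = trim_3sd(residual_rts(rts, lengths))
```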

Since the design was necessarily incompletely crossed, two separate analyses of variance (ANOVAs) were conducted on partially overlapping subsets of the data, for both question response accuracy and reading time measures. First, a 2 (verb bias) × 2 (ambiguity: with or without that) ANOVA tested the effect of verb bias in sentences with embedded clauses. (The label ANOVA1 is used in tables throughout the paper for this analysis.) This analysis omitted the conditions with direct object continuations and specifically tested whether the results replicated previous studies showing an effect of verb bias in sentences with embedded clause continuations. A second 2 (verb bias) × 2 (sentence continuation type) ANOVA tested the effect of verb bias in temporarily ambiguous sentences with the two different kinds of continuations (labeled ANOVA2 in tables throughout). This analysis omitted unambiguous sentences and compared only the temporarily ambiguous conditions. All analyses were conducted by both participants and items, and MinF′ values were also computed (Clark, 1973). For the most part, ANOVA results will be reported in tables (with the occasional isolated result reported in the text instead). 95% confidence intervals were calculated for pairwise contrasts of interest (Masson & Loftus, 2003). Mean length-corrected residual reading times are shown for all conditions in all regions in Table 2 below, and the results for just the disambiguating region are also shown in Figures 1a and 1b. Results of both ANOVAs at both the temporarily ambiguous noun phrase and the disambiguating verb region are shown in Table 3.
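The minF′ statistic is not defined in the text; for reference, the standard formula from Clark (1973) is given below, where F1 and F2 are the by-participants and by-items F values and n1 and n2 are their respective error degrees of freedom. It reproduces the table entries (for example, F1 = 3.1 with 53 error df and F2 = 2.4 with 74 error df give minF′ ≈ 1.3 with roughly 127 denominator df).

```latex
\[
\min F'(i, j) = \frac{F_1 F_2}{F_1 + F_2},
\qquad
j = \frac{(F_1 + F_2)^2}{F_1^2 / n_2 + F_2^2 / n_1},
\]
% where $i$ is the shared numerator degrees of freedom, and $n_1$, $n_2$ are the
% error degrees of freedom of the by-participants ($F_1$) and by-items ($F_2$) tests.
```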

Table 2.

Mean residual reading times in Experiment 1 (Msec)

| Condition | Subject NP | admitted/confirmed | (that) | Temp. Ambig. NP | Disambig. | Rest of sentence |
|---|---|---|---|---|---|---|
| Clause-bias verb + that + Clause | 19 | 59 | 23 | −5 | 15 | −69 |
| Clause-bias verb + Clause | 19 | 50 | - | 27 | 23 | −64 |
| Clause-bias verb + DO | 21 | 47 | - | 28 | 29 | −62 |
| DO-bias verb + that + Clause | 9 | 65 | 26 | 3 | 14 | −70 |
| DO-bias verb + Clause | 17 | 68 | - | 17 | 46 | −61 |
| DO-bias verb + DO | 20 | 65 | - | 21 | 15 | −60 |

Figure 1. Residual reading times at the disambiguating region in Experiment 1.

Table 3.

Analysis of variance results at the temporarily ambiguous NP and the disambiguating verb region for length-corrected reading times in Experiment 1

| Region | Analysis | Source | F1 | df1 | F2 | df2 | MinF′ | df |
|---|---|---|---|---|---|---|---|---|
| Temporarily ambiguous NP | ANOVA1 | Verb | 3.1! | 1,53 | 2.4 | 1,74 | 1.3 | 1,127 |
| | | Ambiguity | <1 | 1,53 | <1 | 1,74 | <1 | |
| | | V × A | <1 | 1,53 | <1 | 1,74 | <1 | |
| | ANOVA2 | Verb | <1 | 1,53 | <1 | 1,74 | <1 | |
| | | Structure | 30.4** | 1,53 | 36.5** | 1,74 | 16.6** | 1,119 |
| | | V × S | 4.3* | 1,53 | 4.0* | 1,74 | 2.1 | 1,125 |
| Disambig. | ANOVA1 | Verb | <1 | 1,53 | <1 | 1,74 | <1 | |
| | | Ambiguity | 9.9** | 1,53 | 7.1** | 1,74 | 4.1* | 1,127 |
| | | V × A | 19.4** | 1,53 | 14.3** | 1,74 | 8.3** | 1,127 |
| | ANOVA2 | Verb | 11.3** | 1,53 | 3.7! | 1,74 | 2.8! | 1,113 |
| | | Structure | 16.8** | 1,53 | 26.9** | 1,74 | 10.4** | 1,109 |
| | | V × S | 6.0* | 1,53 | 7.8** | 1,74 | 3.4! | 1,117 |

Note. ANOVA1 includes only temporarily ambiguous and unambiguous sentences with Clause continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted that the mistake might not have been caught”). ANOVA2 includes only temporarily ambiguous sentences with either Clause or DO continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted the mistake because she had been caught”).

* p < .05; ** p < .01; ! p < .10

Disambiguating region

Since the hypotheses driving this study primarily concern what happens at the disambiguating region, results from that region will be described first. As noted above in Table 2, the disambiguating region was read 21 msec more slowly overall in temporarily ambiguous sentences without that. However, this ambiguity effect was 24 msec larger in sentences with Direct-object bias verbs than in sentences with Clause-bias verbs, leading to a reliable interaction between verb bias and ambiguity. Reading times were also 12 msec slower overall in sentences with Direct-object bias verbs than with Clause-bias verbs, resulting in a main effect of verb bias by participants. This pattern of results replicates previous studies finding that people had disproportionately more difficulty when a Direct-object bias verb was followed by a clause without the complementizer that (e.g., Garnsey et al., 1997; Trueswell et al., 1993) than when a Clause-bias verb was.

The central focus of the current study, however, is what happens in sentences with direct object continuations, which was assessed in a second ANOVA testing the effect of verb bias in temporarily ambiguous sentences with both kinds of continuations (i.e., direct objects and embedded clauses), illustrated in Figure 1b and summarized in Table 3 as ANOVA2. As noted in the table, the disambiguating region was read 13 msec more slowly overall in embedded clause continuations than in direct object continuations, producing a reliable main effect of continuation type. However, as Figure 1b shows, this effect came entirely from sentences with Direct-object bias verbs. In sentences with Clause-bias verbs, there was actually a small difference in the opposite direction. This pattern resulted in a reliable interaction between sentence continuation type and verb bias. There was no main effect of verb bias in this analysis.

The nature of the interaction between verb bias and sentence continuation type illustrated in Figure 1b suggests that reading times were longer whenever there was a mismatch between verb bias and sentence continuation, and this was confirmed in pairwise comparisons. Embedded clause continuations were read significantly more slowly after Direct-object bias verbs than after Clause-bias verbs (mean difference = 22.9 msec, 95% CI = 9.8 to 36.1). The reverse was also true: direct object continuations were read significantly more slowly after Clause-bias verbs than after Direct-object bias verbs (mean difference = −14.5 msec, 95% CI = −27 to −2.1). It is worth pointing out here that despite the greater conceptual and structural complexity of embedded clauses, Clause continuations were actually read slightly faster (mean difference = 6 msec, 95% CI = −4.1 to 15.9) than direct object continuations after Clause-bias verbs, although this difference did not approach reliability.

In the Materials section above, we described two ways in which the sentences were not matched across verb type as well as would be ideal, and described two additional analyses that would be done to help rule out alternative interpretations of the results based on small but possibly important mismatches in sentence properties. In the first of those additional analyses, we analyzed uncorrected reading times for just the first disambiguating word, since the length of the first disambiguating word was matched better across sentences with the two kinds of verbs than the two-word disambiguating region was. (See Table 1 above for length and frequency information about the first disambiguating word, and Table 8 in Appendix 3 for uncorrected and untrimmed reading times for all sentence regions. Note, however, that the disambiguating region whose uncorrected, untrimmed reading times are presented in Table 8 is the two-word region.)

The pattern of results for the first disambiguating word was similar to that for the corrected two-word disambiguating region, though weaker. These results are listed in Appendix 3. In general, the numeric pattern across condition means was similar for uncorrected times at the first disambiguating word and corrected times on the two-word disambiguating region, but the differences were smaller and less reliable in the single-word analysis. Recall that the effect of verb bias was predicted to be smaller in sentences with direct object continuations than in those with clause continuations, so it is not surprising that it would fail to reach reliability in an analysis of a less stable single-word measure on a short, high-frequency function word. Most crucially, uncorrected reading times on just the first disambiguating word in direct object continuations were numerically longer following Clause-bias verbs than following Direct-object bias verbs, so the similar but statistically reliable pattern observed for the length-corrected reading times on the two-word disambiguating region cannot be an artifact of the length-correction procedure.

In the second additional analysis done to rule out alternative explanations of the results, only items that equated plausibility ratings across verb type were analyzed. Again the pattern of results was similar. These results are also listed in Appendix 3. Crucially, it cannot be the case that small overall plausibility differences between sentences with the two types of verbs were responsible for the pattern of results at the disambiguation in the full set of items, since the pattern stayed the same when only items equating plausibility ratings across verb type were included.

The temporarily ambiguous noun phrase

There was a main effect of ambiguity at the noun phrase, which was read 23 msec more slowly in ambiguous sentences than in unambiguous ones. Although there was no main effect of the verb’s bias at the noun phrase, there was a reliable interaction between verb bias and ambiguity such that the inclusion of that decreased reading times on the noun phrase by 17 msec more in sentences with Clause-bias verbs than it did in sentences with Direct-object bias verbs (95% CI = 0.5 to 33.0). Although the word that itself was read almost equally quickly following both kinds of verbs (26 vs. 23 msec; Fs < 1), the interaction at the noun phrase after it suggests that the mismatch between a verb that predicts a direct object and a complementizer which clearly contradicts that prediction may have slowed readers when they reached the following noun phrase in sentences with Direct-object bias verbs.

Other sentence regions

Only two other sentence regions showed any reliable differences. At the main verb itself, Direct-object bias verbs were read 17 msec slower than Clause-bias verbs, leading to a main effect of verb bias in the analysis that included only the temporarily ambiguous conditions (F1(1,53) = 7.0, p < .05; F2(1,74) = 5.7, p <.05; MinF′(1,127) = 3.1, p < .1). A similar numeric pattern was also reported by Garnsey et al. (1997) for a partially overlapping set of materials. In both sets of materials, Direct-object bias verbs were read more slowly in spite of the fact that they were actually higher frequency than the Clause-bias verbs.

In the analysis that included only clause continuations, the sentence-final region was read 6 msec more slowly in ambiguous than in unambiguous sentences, which led to a main effect of ambiguity that was reliable only by participants (F1(1,53) = 7.4, p < .05; F2(1,74) = 2.9, p < .1; MinF′(1,118) = 2.1, p > .1).

Correlations between reading times and verb properties

A comparison of the temporarily ambiguous and unambiguous versions of sentences with embedded clause continuations showed that readers had more difficulty at the disambiguation in the ambiguous versions when the verb had a stronger Direct-object bias (r = .30, p < .01) and when it had a stronger that-preference (r = .34, p < .01), and less difficulty when the verb had a stronger Clause-bias (r = −.25, p < .05). The same pattern was observed in a comparison of the ambiguous sentences with the two kinds of continuations, where the disambiguating regions in embedded clause continuations were read more slowly as both the verb’s Direct-object bias and its that-preference increased (Direct-object bias: r = .40, p < .01, that-preference: r = .46, p < .01) and more quickly as the verb’s Clause-bias increased (r = −.28, p < .05).
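A sketch of how such item-level correlations can be computed is shown below (Python with numpy; this is not the original analysis, and the arrays are hypothetical placeholders aligned by item). Each item’s ambiguity effect, the difference in residual reading time at the disambiguation between its that-less and that-containing clause versions, is correlated with a property of its verb.

```python
# Minimal sketch of the item-level correlation between ambiguity effects and a
# verb property. Values are hypothetical; arrays are aligned by item.

import numpy as np

rt_ambiguous   = np.array([45.0, 30.0, 12.0, 60.0, 5.0])   # residual RT, that-less version
rt_unambiguous = np.array([10.0,  8.0, 15.0, 20.0, 9.0])   # residual RT, version with "that"
do_bias        = np.array([0.80, 0.65, 0.15, 0.90, 0.10])  # verb's direct-object bias

ambiguity_effect = rt_ambiguous - rt_unambiguous
r = np.corrcoef(ambiguity_effect, do_bias)[0, 1]            # item-level Pearson correlation
print(round(r, 2))
```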

The complementizer that in the unambiguous sentences was read more quickly as the overall frequency of the verb preceding it increased (r = −.34, p < .05). This may be a kind of spillover effect often seen in reading time data, i.e., difficulty in reading a less familiar word often spills over onto the word following it as well.

Discussion

Prior experience with the relative likelihood of the different argument structures that are possible for a verb guided readers’ interpretation of sentences containing those verbs. The new and most important finding here was that verb bias affected reading times in simple direct object continuations. Pairwise comparisons in the main analysis using a two-word disambiguating region and an item analysis demonstrated that when readers were led by verb bias to expect an embedded clause but got a direct object instead, they slowed down reliably compared to when the verb had predicted a direct object all along. In follow-up analyses using either just the first disambiguating word or a subset of items closely matched in rated plausibility, a similar pattern was observed, though the effects were weaker and did not always reach reliability, particularly in the single-word analysis. Consistent with previous studies, however, readers also used their prior experience with verbs to help them understand embedded clauses. As expected, readers had very little trouble at the disambiguating region of sentences that contained the complementizer that, regardless of verb bias, since this word strongly predicts an upcoming clause. If the sentence did not contain that, however, readers were helped if the sentence contained a Clause-bias verb and hindered if the sentence contained a Direct-object bias verb. The finding that comprehension of clauses is aided by a verb that strongly predicts an embedded clause replicates earlier findings by Trueswell et al. (1993) and Garnsey et al. (1997).

Finding verb bias effects in sentences with direct object continuations provides strong evidence that lexical properties such as verb bias guide the earliest parsing decisions in temporary ambiguities. If the comprehension system initially analyzed every noun following a verb as a direct object, then direct object continuations should be read equally quickly after Clause-bias verbs and Direct-object bias verbs, since no revision should be required in either case. The results reported here are not consistent with that prediction. The speed-accuracy tradeoff obtained in comprehension questions for Direct-object bias verbs followed by direct object continuations (i.e., these continuations were read more quickly but the subsequent comprehension questions were answered less accurately) provides further indirect evidence on this point. Previous authors have noted that although comprehension processes are by and large “automatic, highly overlearned mental procedures,” this does not mean that they are error-free (McElree & Nordlie, 1999, p. 486). If in fact a Clause-bias verb followed by a direct object leads readers to devote more processing resources, in anticipation of an upcoming complex clause, than a Direct-object bias verb followed by a direct object does, then the more complex sentences may counterintuitively be answered more accurately. This is especially true if participants are not fully engaged during every sentence of the study. This evidence is of course largely indirect, since it occurs after the complete sentence has already been displayed. It also does not distinguish between classes of theories, since a serial model would also presumably predict that Direct-object bias verbs followed by direct object continuations should be answered more accurately. Whatever the source, however, a speed-accuracy tradeoff should not affect the reading time results reported above, since all trials with incorrect question responses were excluded from those analyses.

Finally, previous studies of similar sentences have found reading times to be correlated with properties of the verbs in the sentences (e.g., Garnsey et al., 1997; Trueswell et al., 1993), and the same was true here. In addition to Direct-object bias and Clause-bias strength, another verb property that has sometimes been found to be correlated with reading times is that-preference. A verb’s that-preference was calculated as the percentage of all sentences in the norming study containing that verb followed by an embedded clause that explicitly included the complementizer that. That-preference is typically positively correlated with Direct-object bias (r = .58 here), since the more likely a verb is to be followed by direct objects, the more likely it is that that will be included when it is instead followed by an embedded clause. Conversely, the more likely a verb is to be followed by embedded clauses, the less likely it is that that will be included, so that-preference is generally negatively correlated with Clause-bias (r = −.54 here). Of course, Direct-object bias and Clause-bias are also inevitably negatively correlated with each other (r = −.73 here). We calculated only simple correlations between each of these three verb properties and reading times because we were interested in using graded effects of verb properties to help diagnose how quickly knowledge about the verbs began to influence reading times, rather than in trying to determine the independent contributions of each verb property. As noted above, readers had a more difficult time in sentences without that which continued as clauses when the verb had a stronger Direct-object bias preference or a stronger preference for being followed by that.
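For concreteness, a sketch of the that-preference calculation described above is given below (Python; the counts are hypothetical, and we assume the denominator is the verb’s embedded-clause continuations in the norming study, which is how we read the definition).

```python
# Minimal sketch of that-preference: the percentage of a verb's embedded-clause
# continuations in the norming study that explicitly included "that".
# Counts are hypothetical; the denominator choice is an assumption noted above.

def that_preference(n_clause_with_that, n_clause_without_that):
    n_clause = n_clause_with_that + n_clause_without_that
    return 100.0 * n_clause_with_that / n_clause

print(that_preference(n_clause_with_that=18, n_clause_without_that=6))  # 75.0
```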

The general picture that emerges from Experiment 1, therefore, is that prior experience with verbs guided readers’ interpretation of all sentences. This is suggested by the relative difficulty experienced by readers at a disambiguation that was not congruent with their verb-based expectations, and the relative ease at a disambiguation that was consistent with those expectations. However, a note of caution is in order. Experiment 1 used a self-paced moving window reading paradigm, which is unlike normal reading in several respects. In part because readers cannot go back and re-read earlier sentence regions when they encounter difficulty, self-paced reading times are probably influenced by both initial processing difficulty and the work done to recover from that difficulty. In order to obtain a clearer picture of what readers do when they are free to read more normally, Experiment 2 replicated Experiment 1 using eyetracking.

Experiment 2

The results of Experiment 1 supported the claim that readers use their prior experience with verbs in order to constrain their interpretations of sentences. However, participants in Experiment 1 were tested using self-paced reading, in which participants only see one word at a time. Thus, the purpose of the second experiment was to replicate the findings of Experiment 1 under conditions that more closely approximated natural reading. Accordingly, the stimuli from Experiment 1 were tested using an eyetracker.

Method

Participants

Eighty-four participants (37 males, 70 right-handed, mean age 21) participated for a minimal sum. All had normal uncorrected vision. All were native speakers of English except for one, who was excluded from further analyses. An additional 8 participants were dropped for reasons described below, leaving 75 participants whose results are reported here.

Procedure

The stimuli and procedures in Experiment 2 were identical in all respects to Experiment 1, except that participants’ eye movements were monitored using a Fifth Generation Dual Purkinje Eyetracker interfaced with an IBM-compatible PC. Stimuli were presented on a black and white ViewSonic monitor, at a font size equivalent to four characters per degree of visual angle. Although viewing of stimuli was binocular, only the right eye was monitored. Horizontal and vertical eye positions were sampled once per millisecond, and head position was held constant by a bite bar prepared for every participant. The eyetracker was calibrated for each participant at the beginning of the experimental session and after every break.

A trial began with the trial number displayed at the left side of the screen. Once the participant fixated the trial number and pushed a button, the trial began. An entire sentence was presented on one line of the monitor, and participants were instructed to read the sentence as normally as possible. Once finished, participants were instructed to look at a box on the right side of the screen and push the “yes” button to continue to the comprehension question. As in Experiment 1, each question was a “yes/no” comprehension question that participants answered by pushing buttons held in either hand.

Stimuli

The stimuli were identical to Experiment 1, except that two item errors were corrected.

Reading Time Measures

Reading patterns were examined using several different measures, including first-pass reading time, proportion of trials on which the first pass ended with a regressive eye movement, regression-path reading time (also sometimes called “right-bounded time”), and total time. First-pass reading time for a region summed all of the fixations made on that region before the eyes moved to any other region of the sentence. First-pass times for a region were included in the analyses reported below only if the participant looked at that region before fixating any region further to the right, in order to prevent contamination of the first-pass measure by trials where participants already knew from looking ahead what the eventual sentence continuation would be. For total times, all fixations on a region, including those during both initial reading and re-reading, were included.
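A minimal sketch of the first-pass and total-time definitions just described is given below (Python; not the authors’ software). A trial’s fixations are represented as (region index, duration in ms) pairs in temporal order, and the first-pass inclusion rule above is implemented by returning None when a region further to the right was fixated before the region of interest.

```python
# Minimal sketch of first-pass and total reading time for one region on one trial.
# fixations: list of (region_index, duration_ms) in temporal order. Not the authors' code.

def first_pass_time(fixations, region):
    """Sum fixations on `region` before the eyes first leave it for any other region.

    Returns None if the region was skipped, or if some region further to the right
    was fixated before `region` was reached (the exclusion rule described above)."""
    entered, total = False, 0
    for reg, dur in fixations:
        if not entered:
            if reg > region:
                return None            # looked past the region before landing on it
            if reg == region:
                entered, total = True, dur
        elif reg == region:
            total += dur               # still within the first pass on the region
        else:
            return total               # first pass ends at the first fixation elsewhere
    return total if entered else None

def total_time(fixations, region):
    """Sum all fixations on `region`, during both initial reading and re-reading."""
    return sum(dur for reg, dur in fixations if reg == region)
```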

Both first-pass and total times combine fixations that are spatially contiguous, as noted by Liversedge, Paterson, and Pickering (1998). Another approach described by Liversedge et al. is to sum fixations that are temporally rather than spatially contiguous. “Regression-path reading time” is a measure that combines all temporally contiguous fixations from the time the reader first reaches the region of interest until the reader first advances to the right beyond that region. This is an important distinction, since if the current region causes the reader to regress to earlier portions of the sentence, the fixations on earlier regions resulting from those regressions will be included in the regression-path time for the current region. If readers respond to processing difficulty by regressing to earlier portions of the sentence rather than by slowing down or rereading the same region, both the proportion of trials on which the first pass ends with a regression and the regression-path reading time will be increased. Thus, both of these measures can reveal effects that are missed by both first-pass time and total-time. (See Liversedge et al., 1998, for a more complete discussion, and Clifton, Traxler, Mohamed, Williams, Morris, & Rayner, 2003, for another recent paper using the regression path reading time measure.)
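Under the same representation of a trial as temporally ordered (region, duration) fixations, a sketch of regression-path time, and of whether the first pass ended with a regression, might look like this (again not the authors’ software, and using the same first-pass inclusion rule as above).

```python
# Minimal sketch of regression-path time: all fixations from first entering the
# region until the first fixation on a region to its right, including regressive
# fixations on earlier regions. Not the authors' code.

def regression_path(fixations, region):
    """Return (regression_path_ms, first_pass_ended_with_regression), or (None, None)
    if the region was skipped or reached only after a region to its right."""
    entered, total, regressed = False, 0, False
    for reg, dur in fixations:
        if not entered:
            if reg > region:
                return None, None      # region reached only after looking past it
            if reg == region:
                entered, total = True, dur
        elif reg > region:
            return total, regressed    # first advance beyond the region ends the path
        else:
            total += dur               # re-fixations on the region or regressions to the left
            if reg < region:
                regressed = True
    return (total, regressed) if entered else (None, None)
```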

Results

As in Experiment 1, accuracy in answering the comprehension questions was generally high, with 90% correct performance overall. Also as in Experiment 1, questions following sentences containing Direct-object bias verbs tended to be answered 2% less accurately than those following sentences containing Clause-bias verbs, which was reliable only by participants (F1(1,74) = 5.9, p < .05; F2 < 1; mean difference = −0.018, 95% CI = −0.03 to 0.003). Unlike Experiment 1, there was also a main effect of sentence structure on accuracy by participants, with 1.4% fewer correct responses after sentences with a clause structure (F1(2,148) = 3.5, p < .05; F2 < 1; mean difference = 0.014, 95% CI −0.003 to 0.03). As in Experiment 1, there was also an interaction between verb bias and sentence structure by participants (F1(2,148) = 6.5, p < .01; F2(2,152) = 2.4, p > .1; MinF′(2,126) = 1.8, p > .1), again because the effect of verb bias was greater for sentences with direct object continuations (5%) than for sentences with embedded clause continuations (1.8%).

Sentences were divided into regions as in Experiment 1. (The critical disambiguating region is underlined below.)

  • 5. The ticket agent/admitted/that/the mistake/might not/have been caught.

Trials included in the analysis had to meet the following criteria. First, entire trials were excluded from analysis if the comprehension question was not answered correctly or if the eyetracker lost track of the eye for an extended period, resulting in a loss of 9% and < 1% of the data, respectively. Second, an entire region was excluded from the analysis for first-pass times, regression-path times, or total times if the eyetracker lost track of the participant’s eyes in that region during collection of the data contributing to that measure, resulting in a further loss of < 1% of the data from participants included in the analysis. Third, a region was excluded from the analysis of first-pass reading time and regression-path time if the participant fixated on any word to the right of the region before fixating on the region itself, resulting in a loss of approximately 2% of the data in every condition except those containing the complementizer that, where approximately 5% of the data were lost. As in Experiment 1, times for a particular participant at a particular region greater than three standard deviations from the mean were replaced with the cutoff value at three standard deviations from the mean, resulting in approximately 1.6% of the data being replaced in first pass times and total times, and 1.5% of the data being replaced in regression-path duration times.

Cells with three or fewer trials remaining in a condition for a participant or an item after all of the exclusion criteria were applied were replaced with the grand mean across conditions for the individual participant or the individual item at the appropriate region of the sentence, in order to eliminate empty or extremely noisy cells (1.4% of the data were replaced in this way). Finally, participants who had 5 or more cells (out of the 34 condition × region cells in the design) replaced were excluded entirely from the analysis, resulting in the loss of 8 participants. All results reported below are based on the remaining 75 participants.
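A sketch of the cell-replacement and participant-exclusion rules just described might look as follows (Python with numpy; not the original code, and the data structure is a simplification). Cells with three or fewer surviving trials are replaced with the participant’s grand mean for that region across conditions, and a participant with five or more replaced cells is dropped.

```python
# Minimal sketch of cell replacement and participant exclusion. Not the original code.
# cell_trials maps (condition, region) -> list of surviving trial RTs for one participant.

import numpy as np

def summarize_participant(cell_trials, min_trials=4, max_replaced_cells=4):
    # Grand mean per region, across conditions, for this participant.
    by_region = {}
    for (cond, region), trials in cell_trials.items():
        by_region.setdefault(region, []).extend(trials)
    region_means = {region: np.mean(vals) for region, vals in by_region.items()}

    replaced, cell_means = 0, {}
    for (cond, region), trials in cell_trials.items():
        if len(trials) < min_trials:                  # "three or fewer trials remaining"
            cell_means[(cond, region)] = region_means[region]
            replaced += 1
        else:
            cell_means[(cond, region)] = np.mean(trials)

    if replaced > max_replaced_cells:                 # 5 or more replaced cells: drop participant
        return None
    return cell_means
```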

As in Experiment 1, two different analyses of variance (ANOVAs) were conducted on partially overlapping subsets of the design. First, a 2 (verb bias) × 2 (ambiguity) ANOVA that included only the sentences with embedded clauses was performed, to evaluate the joint effects of verb bias and ambiguity. Second, a 2 (verb bias) × 2 (sentence continuation) ANOVA was performed to evaluate the effects of verb bias in temporarily ambiguous sentences where the ambiguity resolved as either a direct object structure or an embedded clause structure. All analyses were performed on length-corrected reading times by both participants and items, calculated in the same way as described for Experiment 1.

Also as in Experiment 1, two additional sets of ANOVAs were performed, one on a subset of 56 items that equated the rated plausibility of items with the two types of verbs, and the other on uncorrected reading times on just the first word of the two-word disambiguating regions (see Appendix 3 for uncorrected reading times). Two items that were dropped from the analyses in Experiment 1 were included here because errors in them were fixed.

Disambiguating region

Once again, results at the disambiguating region are the most important ones for distinguishing among processing theories, so those will be described first. Also, since the primary motivation for the eyetracking study was to obtain measures of early processing, first-pass reading times will be described first, followed by regression-path times and then total times.

First-pass reading times

First-pass reading times at each region are summarized in Table 4, first-pass times at just the disambiguating region are shown in Figures 2a and 2b, and the results of the ANOVAs for that region are summarized in Table 5. As noted in the table, the analysis including only sentences with embedded clauses revealed that first-pass reading times at the disambiguating region were significantly slower in temporarily ambiguous sentences than in unambiguous ones, resulting in a main effect of ambiguity. First-pass times on the disambiguation were also significantly slower overall in embedded clause sentences with Direct-object bias verbs than in those with Clause-bias verbs, leading to a main effect of verb bias that was reliable by participants but not by items. Although ambiguity slowed first-pass reading more in sentences with Direct-object bias verbs than in ones with Clause-bias verbs, the interaction between verb bias and ambiguity was only marginal by both participants and items. This particular interaction has been found to be more robust in several previous studies (e.g., Garnsey et al, 1997; Trueswell et al., 1993; see, however, Kennison, 2001). The interaction failed to reach reliability here in part because there was a small effect of verb bias in the unambiguous sentences (4 msec) as well as in the ambiguous ones (24 msec). Although the 24-msec difference in ambiguous sentences exceeded its 95% confidence interval of 17 msec by participants and 21 msec by items, the overall pattern was not robust enough to produce a reliable interaction.

Table 4.

Mean residual reading times in Experiment 2 (Msec)

| Condition | Subject NP (1st / RP / T) | admitted/confirmed (1st / RP / T) | (that) (1st / RP / T) | Post-verb NP (1st / RP / T) | Disambig. (1st / RP / T) | Rest of sentence (1st / RP / T) |
|---|---|---|---|---|---|---|
| Clause-bias verb + that + Clause | 88 / - / 37 | −9 / −25 / −53 | 42 / −8 / −31 | −60 / −30 / −89 | −43 / −46 / −16 | −59 / - / −26 |
| Clause-bias verb + Clause | 84 / - / 39 | −7 / −19 / −16 | - | −32 / −19 / −56 | −27 / −12 / 13 | −62 / - / −58 |
| Clause-bias verb + DO | 95 / - / 45 | 2 / −19 / 10 | - | −21 / −7 / −31 | −32 / 4 / −30 | −53 / - / −64 |
| DO-bias verb + that + Clause | 84 / - / 34 | −14 / −34 / −53 | 49 / 0 / −8 | −58 / −13 / −62 | −40 / −26 / −28 | −16 / - / −37 |
| DO-bias verb + Clause | 83 / - / 51 | −5 / −36 / 23 | - | −43 / −30 / 11 | −2 / 34 / 110 | −69 / - / 33 |
| DO-bias verb + DO | 80 / - / 15 | −7 / −25 / −27 | - | −22 / −25 / −66 | −43 / −28 / −51 | −58 / - / −94 |

Note. 1st= Mean residual first pass reading times (not calculated at first or last sentence positions); RP= Mean residual regression path times; T= Mean residual total times.

Figure 2. Residual first-pass reading times at the disambiguating region in Experiment 2.

Table 5.

ANOVAs for length-corrected times at the disambiguating region for all reading time measures in Experiment 2

| Measure | Analysis | Source | F1 | df | F2 | df | MinF′ | df |
|---|---|---|---|---|---|---|---|---|
| First-pass time | ANOVA1 | Verb | 5.2* | 1,74 | 2.5 | 1,76 | 1.7 | 1,135 |
| | | Ambiguity | 15.8** | 1,74 | 15.9** | 1,76 | 7.9** | 1,151 |
| | | V × A | 3.4! | 1,74 | 3.3! | 1,76 | 1.7 | 1,151 |
| | ANOVA2 | Verb | 1.6 | 1,74 | 1.1 | 1,76 | <1 | |
| | | Structure | 12.4** | 1,74 | 7.2** | 1,76 | 4.6* | 1,142 |
| | | V × S | 7.8** | 1,74 | 5.0* | 1,76 | 3.1! | 1,145 |
| Regression-path time | ANOVA1 | Verb | 11.4** | 1,74 | 7.3** | 1,76 | 4.4* | 1,144 |
| | | Ambiguity | 17.8** | 1,74 | 20.5** | 1,76 | 9.5** | 1,150 |
| | | V × A | 1.6 | 1,74 | 3.1! | 1,76 | 1.1 | 1,137 |
| | ANOVA2 | Verb | <1 | 1,74 | <1 | 1,76 | <1 | |
| | | Structure | 3.5! | 1,74 | 2.6 | 1,76 | 1.5 | 1,148 |
| | | V × S | 8.9** | 1,74 | 10.8** | 1,76 | 4.9* | 1,149 |
| Total time | ANOVA1 | Verb | 12.1** | 1,74 | 2.9! | 1,76 | 2.3 | 1,110 |
| | | Ambiguity | 31.7** | 1,74 | 29.9** | 1,76 | 15.4** | 1,151 |
| | | V × A | 18.1** | 1,74 | 21.1** | 1,76 | 9.7** | 1,150 |
| | ANOVA2 | Verb | 54.4** | 1,74 | 5.0* | 1,76 | 3.4! | 1,134 |
| | | Structure | 39.6** | 1,74 | 28.3** | 1,76 | 16.5** | 1,147 |
| | | V × S | 23.5** | 1,74 | 10.9** | 1,76 | 7.4** | 1,134 |

Note. ANOVA1 includes only temporarily ambiguous and unambiguous sentences with Clause continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted that the mistake might not have been caught”). ANOVA2 includes only temporarily ambiguous sentences with either Clause or DO continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted the mistake because she had been caught”).

* p < .05; ** p < .01; ! p < .10

As in Experiment 1, for our purposes the most informative test of the effect of verb bias comes from the second ANOVA, which included only ambiguous (that-less) sentences that have the two kinds of continuations (see Figure 2b and compare to Figure 1b). Also as in Experiment 1, the most important result in this analysis was a reliable interaction between verb bias and sentence continuation, reflecting the fact that people’s first pass on the disambiguating region was longer whenever the sentence continuation did not match the main verb’s bias. Pairwise comparisons showed that embedded clause continuations were read 24 msec slower after Direct-object bias verbs than after Clause-bias verbs, exceeding the 95% confidence interval on the difference of 17 msec by participants and 21 msec by items. However, unlike Experiment 1, the 11-msec difference in the opposite direction in sentences with direct object continuations did not exceed the 95% confidence intervals of 16 msec by participants and 21 msec by items. Thus, unlike moving-window reading times in Experiment 1, first-pass times on the two-word disambiguating region in sentences with direct object continuations were not reliably slower after Clause-bias verbs than after Direct-object bias verbs, though there was an 11-msec difference in the expected direction. Finally, there was also a main effect of sentence continuation in first-pass times on the two-word disambiguating region, with first-pass times 23 msec slower overall in sentences with clause continuations than in sentences with direct object continuations. The main effect of verb bias was not reliable.

The pattern of results was similar when uncorrected first-pass times on just the first word of the disambiguating region were analyzed, but less so when the two-word region was analyzed for a subset of 56 items that equated rated plausibility across verb type. These results are summarized in Appendix 3. Given that the results of this second analysis changed slightly when only the plausibility-matched subset of sentences was compared, it is possible that the effect observed in first-pass times in sentences with direct object continuations was due to small differences in plausibility rather than to verb bias. We will return to this point in the discussion section below.

Regression-path time

Readers typically respond to difficulty by slowing down, by going back to re-read earlier sentence regions, or both. The analysis of first-pass times showed that readers slowed down reliably when a clause continuation followed a Direct-object bias verb, but that when a direct object continuation followed a Clause-bias verb, their slowing did not reach reliability. An examination of how often the first pass on the disambiguating region ended with a regression suggested that readers might be having difficulty that was not adequately captured by the first-pass time measure. In both direct object and clause-continuation sentences, the first pass on the disambiguation was slightly more likely to end with a regression whenever the continuation was not the one predicted by verb bias (16% vs. 14%, for both kinds of continuation). Although this difference was small and did not lead to any reliable effects in an analysis of the proportions themselves, it did suggest that it might be informative to examine what happened following regressions from the disambiguating region. As described earlier, regression-path reading time is a measure that captures both slowing down and what happens following regressions. Regression-path times for each region are summarized in Table 4, the pattern at the disambiguating region is shown in Figure 3, and the ANOVA results at the disambiguating region are shown in Table 5. Regression-path times were not calculated for the main subject noun phrase region of the sentence, since there was nowhere to regress to, or for the final region of the sentence, since many of the regressions originating there are actually sweeps back to the left side of the screen in preparation for reading the comprehension question following the sentence.

Figure 3. Residual regression-path reading times at the disambiguating region in Experiment 2

At the disambiguating region, regression-path results were generally similar to the first-pass times reported above, with one important exception, which is apparent from comparing Figures 2b and 3. Compared to sentences in which the verb was followed by the kind of continuation it predicted, readers regressed at the end of their first pass on the disambiguation and spent more time re-reading earlier sentence regions both when a Clause-bias verb was followed by a direct object continuation (32 msec more than Direct-object bias verb + direct object, exceeding the 31-msec 95% confidence interval on the difference by participants, but not the 40-msec 95% confidence interval by items), and when a Direct-object bias verb was followed by a clause continuation (46 msec more than Clause-bias verb + clause, exceeding the 95% confidence intervals on the difference of 41 msec by participants and 35 msec by items). This pattern led to a reliable interaction between verb bias and sentence continuation by both participants and items. No other differences at the disambiguation or any other region of the sentence were reliable in the analysis of the regression-path measure in temporarily ambiguous sentences (ANOVA2 in Table 5).

When non-length-corrected regression-path times were analyzed for just the first word of the two-word disambiguating region, the pattern was similar. These results are summarized in Appendix 3. When the subset of 56 items that equated rated plausibility across verb type was analyzed, the pattern of results was numerically similar but statistically weaker. These results are also summarized in Appendix 3.

The regression-path measure showed that a mismatch between verb bias and sentence continuation appeared to cause difficulty in both kinds of sentences, but the difficulty was still more robust when a clause continuation followed a Direct-object bias verb than when a direct object continuation followed a Clause-bias verb, as predicted in the introduction. In addition, the nature of the response to that difficulty differed somewhat in these two conditions. When an embedded clause was initially encountered following a Direct-object bias verb, readers both slowed down and went back to re-read, but when a direct object was initially encountered following a Clause-bias verb, they mainly went straight back to re-read without first slowing down much on the disambiguation. Possible reasons for these different kinds of response to difficulty will be addressed later in the discussion section.

Total times

Total reading times are shown in Figures 4a and 4b. Total times at the disambiguating region showed a similar pattern of results to the other two measures, though they were most similar to first-pass times, since total times and first-pass times both combine only fixations within the region itself.

Figure 4. Residual total reading times at the disambiguating region in Experiment 2

In the ANOVA including only sentences with embedded that-clauses (see Figure 4a), readers spent longer on the disambiguating region in temporarily ambiguous sentences (without that) than in unambiguous versions, resulting in a main effect of ambiguity. However, this effect was larger in sentences with Direct-object bias verbs (138 msec) than in sentences with Clause-bias verbs (29 msec), leading to a reliable interaction between verb bias and ambiguity. Total reading times were also longer overall in sentences with Direct-object bias verbs than in ones with Clause-bias verbs, resulting in a main effect of verb bias that was reliable by participants and marginal by items.

Figure 4b shows the pattern of total reading times in temporarily ambiguous sentences with both kinds of continuations. People spent 102 msec more time on the disambiguating region of sentences with embedded clauses than in sentences with direct objects, resulting in a main effect of sentence continuation. There was also a main effect of verb bias, with longer total times in sentences with Direct-object bias verbs than in sentences with Clause-bias verbs. In addition, sentence continuation interacted reliably with verb bias.

Pairwise comparisons showed that the interaction in Figure 4b came from longer times when sentence continuation mismatched verb bias. Embedded clause continuations were read 97 msec more slowly after Direct-object bias verbs than after Clause-bias verbs, exceeding the 95% confidence intervals on the difference of 38 msec by participants and 58 msec by items. However, direct object continuations were read only 21 msec more slowly after Clause-bias verbs than after Direct-object bias verbs, which did not exceed the 95% confidence intervals on the difference of 28 msec by participants and 49 msec by items. Followup analyses of uncorrected times on just the first disambiguating word and on a plausibility-rating-matched subset of items are not reported for total times, since those analyses were intended to provide evidence relevant to interpreting effects observed in reading time measures that tap into early processing stages.

The temporarily ambiguous noun phrase

ANOVA results at the temporarily ambiguous noun phrase are shown in Table 6 for all measures. Neither first-pass times nor regression-path times on the temporarily ambiguous noun phrase showed any effect of the bias of the preceding verb in any of the ANOVAs (all Fs ≤ 2). There were, however, effects of ambiguity in the ANOVA comparing temporarily ambiguous and unambiguous sentences containing embedded clauses. First-pass times on noun phrases preceded by that were 44 msec shorter than on noun phrases not preceded by that, which was reliable by both participants and items.

Table 6.

Analysis of variance results at the temporarily ambiguous NP region for all reading time measures in Experiment 2

Source F1 (by participants) df F2 (by items) df MinF′ df
First-Pass Time ANOVA1 Verb <1 1,74 <1 1,76 <1
Ambiguity 7.5** 1,74 12.9** 1,76 4.7* 1,140
V × A 1.8 1,74 1.5 1,76 <1
ANOVA2 Verb 1.2 1,74 <1 1,76 <1
Structure 5.9* 1,74 4.6* 1,76 2.6 1,149
V × S <1 1,74 1.9 1,76 <1
Regression-Path Time ANOVA1 Verb <1 1,74 <1 1,76 <1
Ambiguity <1 1,74 <1 1,76 <1
V × A 4.9* 1,74 2.7 1,76 1.7 1,140
ANOVA2 Verb 2.4 1,74 <1 1,76 <1
Structure <1 1,74 <1 1,76 <1
V × S <1 1,74 <1 1,76 <1
Total Time ANOVA1 Verb 22.7** 1,74 4.0* 1,76 3.4! 1,102
Ambiguity 9.2** 1,74 18.8** 1,76 6.2* 1,135
V × A 2.7 1,74 3.4! 1,76 1.5 1,149
ANOVA2 Verb 2.1 1,74 <1 1,76 <1
Structure 3.4! 1,74 4.0* 1,76 1.8 1,150
V × S 10.8** 1,74 11.4** 1,76 7.8** 1,133

Note. ANOVA1 includes only temporarily ambiguous and unambiguous sentences with Clause continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted that the mistake might not have been caught”). ANOVA2 includes only temporarily ambiguous sentences with either Clause or DO continuations. (Clause-bias examples: “The ticket agent admitted the mistake might not have been caught” vs. “The ticket agent admitted the mistake because she had been caught”).

* p < .05; ** p < .01; ! p < .10

In the ANOVA evaluating the effect of the sentence continuation in temporarily ambiguous sentences only, there was also a reliable effect of the continuation following the noun phrase in first-pass times, with noun phrases followed by clause continuations being read 15 msec faster than those followed by direct object continuations. Since the sentences with different continuations were still identical at this point, the effect in first-pass times must be due to preview. One possible explanation for the direction of this effect is that the first word of the disambiguating region was a bit more frequent for clause continuations than for direct object continuations (1070 vs. 901, on average; see Table 1), so to the extent that there was any preview of the next word, that word was a bit more familiar on average in clause continuations.

Other regions in the sentence

At the final region of the sentence, there was an interaction between ambiguity and verb type in both first-pass times and in total times, because there was a larger effect of ambiguity at the ends of sentences with Direct-object bias verbs than in sentences with Clause-bias verbs.

In the unambiguous conditions, first-pass times on the complementizer that were slightly faster after Clause-bias verbs than after Direct-object bias verbs: 7 msec faster by participants, equal to the 7-msec 95% confidence interval on the difference, and 13 msec faster by items, exceeding the 11-msec 95% confidence interval. The same pattern was found in total times, which were 23 msec shorter after Clause-bias verbs by participants, exceeding the 16-msec 95% confidence interval, and 33 msec shorter by items, exceeding the 24-msec 95% confidence interval. In regression-path times, a difference in the same direction did not reach reliability. The effect of verb bias in first-pass times suggests that comprehenders found a complementizer more felicitous after Clause-bias verbs than after Direct-object bias verbs. These results replicate similar results in Garnsey et al. (1997).

Correlations between reading times and verb properties

Correlations between properties of the verbs and reading times were analyzed just as in Experiment 1, and are reported in Table 7 alongside those from Experiment 1. Verb properties correlated reliably with reading times in much the same way in both experiments, suggesting that verb properties began to influence reading times quite rapidly. Since the most important aspect of the current study is the comparison of temporarily ambiguous sentences with direct object and clause continuations, in the interests of brevity correlation results will be reported only for that comparison.
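As an illustrative sketch (not the authors' analysis code) of how item-level correlations of this kind can be computed, the Python fragment below correlates each verb property with mean reading times in each continuation type and with the clause-minus-DO difference score, in the spirit of Table 7. The file name and all column names are hypothetical placeholders.

# Illustrative sketch only: item-level correlations between verb properties
# and disambiguating-region reading times. File and column names are
# hypothetical, not the authors' actual variables.
import pandas as pd
from scipy.stats import pearsonr

items = pd.read_csv("item_means.csv")  # one row per item: verb properties and mean RTs

# Difference score: clause-continuation RT minus direct-object-continuation RT.
items["rt_diff"] = items["rt_clause_cont"] - items["rt_do_cont"]

for prop in ["clause_bias", "do_bias", "that_pref"]:
    for measure in ["rt_clause_cont", "rt_do_cont", "rt_diff"]:
        r, p = pearsonr(items[prop], items[measure])
        print(f"{prop} vs. {measure}: r = {r:.2f}, p = {p:.3f}")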

Table 7.

Correlations between verb properties and reading time measures at the disambiguating region in temporarily ambiguous sentences, across Experiments 1 and 2

Verb properties (three groups of columns): Clause-bias | DO-bias | That-preference
Within each group: Expt. 1 reading times | Expt. 2 first-pass times | Expt. 2 regression-path times
Clause continuation −.25* −.13 −.32** .33** .24* .27* .39** .21 .38**
DO continuation .12 .14 .08 −.21! −.13 −.20! −.22! −.17! −.16
Difference (C - DO) −.28* −.19! −.27* .40** .27* .32** .46** .28* .37**
Note: ! = p < .10; * = p < .05; ** = p < .01

Correlations above and below the dashed line are reliably different from each other at p < .05

Reading times were longer at the disambiguating region in clause continuations after verbs with stronger Direct-object bias (first pass r = .27, p < .05; regression-path r = .32, p < .01) and stronger that-preference (first pass r = .28, p < .05; regression-path r = .37, p < .01), just as was found for moving window times in Experiment 1. Also as in Experiment 1, reading times on the disambiguation toward the clause interpretation were shorter following verbs with stronger Clause-bias (first pass r = −.19, p < .1; regression path r = −.27, p < .05).

In Experiment 1, the only verb property that reliably affected reading times on the complementizer that in the unambiguous sentences was the overall frequency of the verb. In contrast, in Experiment 2 first-pass times on that were reliably affected by verbs’ Clause-bias strength (r = −.28, p < .01), Direct-object bias strength (r = .29, p < .01), and that-preference (r = .23, p < .05).

The overall pattern of the correlation results across both studies will be addressed in more detail in the general discussion section.

Discussion

The results for the subset of the Experiment 2 design including only ambiguous and unambiguous sentences with embedded clauses showed that Clause-bias verbs reliably eased the comprehension of the ambiguous versions, thereby replicating several previous studies (Garnsey et al., 1997; Trueswell et al., 1993) as well as Experiment 1 here. This effect was observed in all three of the measures used in Experiment 2: first-pass reading time, regression-path time, and total reading time.

More important is what happened in ambiguous sentences with Clause and direct object continuations. The interaction between verb bias and sentence continuation reported in Experiment 1 was also replicated in all three reading-time measures in Experiment 2. In all measures, the interaction reflected the fact that readers slowed down whenever the sentence continuation violated verb-based expectations. However, the reliability levels of an important component of the interaction varied across the three measures. While the degree of slowing when a clause continuation followed a Direct-object bias verb was robust enough to reach reliability in all measures, the slowing observed when a direct object continuation followed a Clause-bias verb reached reliability only in the regression-path reading time measure, with trends in the predicted direction in the other measures. Thus, readers’ immediate response to violations of verb-based expectations was somewhat different in the two kinds of sentence continuations.

In Clause continuations, readers stayed on the disambiguation long enough to produce reliably longer first-pass times, and then they also went back and reread earlier sentence regions, resulting in reliably longer regression-path times as well. In contrast, in direct object continuations, readers went straight back and re-read earlier regions without staying on the disambiguation long enough to produce reliably longer first-pass times. Thus, both kinds of violation of verb-based expectations led to difficulty, but the response to that difficulty differed. One possible explanation for this differing response has to do with what readers encountered, compared to what they were expecting. Embedded clauses are both structurally and conceptually more complex than direct objects. Clause-bias verbs led readers to expect a complex clause, but what they found instead was a simple direct object, while Direct-object bias verbs led readers to expect a simple direct object and what they found instead was a complex clause. It may be harder to recover from encountering something complex when you expect something simple than it is to recover from encountering something simple when you expect something complex. If so, this would explain why readers both slowed down (first-pass times) and went back and reread earlier regions (regression-path times) when they encountered something complex when they were expecting something simple, but they mainly went back and reread earlier regions when they encountered something simple when they expected something complex.

It is important to head off one potential misunderstanding at this point. A rather undesirable feature of regression-path time is that it obviously conflates initial analysis time and reanalysis time, perhaps even more than first-pass time does. However, the fact that verb bias reliably affected only regression-path times in sentences with direct object continuations does not mean that verb bias effects were limited to reanalysis. Recall that the reason sentences with direct object continuations were included in our studies was that the Garden Path Model predicts that such sentences should never be reanalyzed. Although much of the time spent re-reading earlier regions may have been spent actually reanalyzing the sentence, the fact that re-reading was triggered by a direct object continuation after a Clause-bias verb shows that verb bias operates in the comprehension system even when reanalysis should not be necessary. If verb bias were restricted solely to reanalysis, there would never be a need to re-read a direct object continuation. However, readers went back and re-read earlier sentence regions when they encountered a direct object continuation after being misled by a Clause-bias verb to expect something else. The only possible reason for sentences with direct object continuations to require such reanalysis, which the regression-path times show they did, is that verb bias led readers to expect a clause continuation. In other words, verb bias guided their initial interpretation, not just their reanalysis. Thus, verb bias influenced the comprehension of the structurally simpler alternative interpretation of an ambiguity, contrary to the predictions of modular two-stage models like the Garden Path Model.

In sum, the results of Experiment 2 confirmed Experiment 1 by showing that verb bias influenced the time spent reading the disambiguation, both in sentences with clause continuations and in those with direct object continuations. These effects, though, were generally smaller and more variable in sentences with direct object continuations. They were also sometimes weaker in the subset analyses, likely because of the reduced power of those analyses (see Appendix 3). As noted above, obtaining higher plausibility ratings for direct objects following Clause-bias verbs would have required using verb/noun pairings that are actually even more plausible than the ones that produce the same ratings for Direct-object bias verbs. We concluded that our materials were sufficiently matched in plausibility, given that Garnsey et al. (1997) found no early effects of far larger plausibility differences (3.6 on the same rating scale, compared to 0.8 here) on initial reading times when verbs were strongly biased. Presumably, if lower plausibility ratings for mismatching sentence continuations could by themselves have explained the effects reported above, those effects would have been equally present in both types of continuations.

General Discussion

The studies reported here addressed two questions about the structure of the human sentence comprehension system. First, when is information about verb bias used to make parsing decisions? Second, does verb bias influence reading times in sentences with the simpler of the possible structures for an ambiguity, as predicted by interactive models of comprehension but not by modular two-stage theories? In answer to the first question, the findings of both Experiments 1 and 2 confirm several previous studies in suggesting that verb bias is used almost immediately in parsing sentences. The strongest evidence here comes from first-pass reading times in Experiment 2, because that is the measure that most clearly taps into initial processing. In answer to the second question, the results of both experiments reveal influences of verb bias in sentences with simple direct object structures. In both experiments, direct object continuations were read more slowly after verbs that strongly predicted an embedded clause than after verbs that predicted direct objects.

According to Frazier (1995) and Binder, Duffy, and Rayner (2001), a key test for interactive language processing theories is to show verb bias effects in the simpler of the alternative possible structures for a given kind of ambiguous sentence. As Frazier notes, constraint-satisfaction versions of interactive models assume that "…it is completely accidental…that garden paths have not been demonstrated in simple syntactic structures" (p. 442). A demonstration of garden-path effects in the simpler analysis, which the Garden Path Model takes to be the preferred one, is a key test of constraint-satisfaction models because verb bias effects in more complex structures like the clause continuations illustrated earlier in (2a) and (3a) could be due to ease of recovery from an initial misanalysis rather than to the initial analysis itself. The results reported here show for the first time that when verb bias provides a strong cue, it influences even the processing of simple syntactic structures. This finding argues against modular two-stage models, which do not permit the parser to access verb bias information. Such models interpret all verb bias effects as being due to reanalysis after initially misinterpreting an ambiguity in the structurally simplest way. On such an account, no reanalysis should ever be necessary in simple structures like direct objects, and thus there should be no opportunity for verb bias to have any effect.

Very few previous studies have taken a similar approach by investigating the effects of various kinds of cues on the comprehension of temporarily ambiguous sentences that resolve in the simplest possible way (see, however, Binder, Duffy, & Rayner, 2001; Kennison, 2001; Pearlmutter & MacDonald, 1995; and van Berkum, Brown, & Hagoort, 1999). In one such study, Binder and colleagues investigated sentences that were temporarily ambiguous between a simple structure and a more complex embedded relative clause structure (e.g., The wife deserted her unfaithful husband and moved to another country vs. The wife deserted by her unfaithful husband moved to another country), and manipulated both the thematic fit of the initial noun (e.g., wife) as an agent or theme of the verb following it (e.g., deserted) and whether a preceding context paragraph contained an obvious referent for the definite noun phrase the wife. They found that both kinds of cues affected both first- and second-pass reading times on the disambiguation when the sentence resolved as a complex reduced relative structure, but crucially did not when it resolved with the simpler structure. They argued that the absence of context effects in simple sentences shows that their presence in complex sentences must be due to reanalysis.

The kinds of cues investigated by Binder and colleagues are ones that require combinatorial processing of either the plausibility of particular noun-verb combinations within the critical sentence or referential context from outside the critical sentence. The cue investigated here, namely verb bias, the statistical frequency with which verbs combine with different argument structures, may become available to the processing system more quickly, since determining the thematic fit between a noun and a verb may require more extensive processing than simply choosing between a verb's subcategorization preferences. In a previous study examining the relative contributions of verb bias and the plausibility of particular verb-noun combinations to sentence comprehension, Garnsey and colleagues (Garnsey, Pearlmutter, Myers, & Lotocky, 1997) found that verb bias had earlier and stronger effects than the plausibility of particular verb-noun combinations. Thus, verb bias may influence the processing of sentences with simple structures even though the cues investigated by Binder and colleagues apparently did not.

The studies reported here used a different kind of sentence than Binder and colleagues' studies did, i.e., sentences like those illustrated earlier in example (1a), which have a temporary ambiguity about whether or not a noun immediately following a verb is its direct object. A large number of previous studies have investigated this particular type of sentence (e.g., Frazier & Rayner, 1982; Holmes, Kennedy, & Murray, 1988; Kennedy, Murray, Jennings, & Reid, 1989; Rayner & Frazier, 1987; Traxler & Pickering, 1996), including several that have specifically examined verb bias (Ferreira & Henderson, 1990; Henderson & Ferreira, 1990; Holmes & Forster, 1972; Holmes, Stowe, & Cupples, 1989; Jennings, Randall, & Tyler, 1997; Kennison, 2001; Mitchell & Holmes, 1985; Osterhout, Holcomb, & Swinney, 1994; Pickering & Traxler, 1998; Pickering, Traxler, & Crocker, 2000; Schmauder & Egan, 1998; Schneider & Phillips, 2001; Sturt, Pickering, Scheepers, & Crocker, 2001; Sturt, Pickering, & Crocker, 1999; Trueswell, Tanenhaus, & Kello, 1993; Trueswell & Kim, 1998). Since early versions of the Garden-Path Model proposed that the first-stage parser would not make use of probability information tied to particular lexical items, studies investigating the timing of verb bias effects were devised to test the model's predictions. In one early study, for instance, Ferreira and Henderson (1990) found that readers' first fixations on the disambiguating embedded verb needed in example (6a) below were as slow as those on needed in (6b), even though the main verb wished is much more likely to be followed by embedded clauses than by direct objects, while the reverse is true for forgot.

  • 6a. He wished Pam needed a ride home.

  • 6b. He forgot Pam needed a ride home.

Unlike first fixations, total reading times did show effects of verb bias, leading Ferreira and Henderson to conclude that verb bias influences a later reanalysis stage rather than the earliest stages of interpretation.

In contrast, other researchers (Trueswell et al., 1993; Garnsey et al., 1997) have found that first-pass reading times on the disambiguating words in sentences like examples (1a) and (6) are influenced by verb bias. More specifically, when the main verb predicts an embedded clause, first-pass reading times on the disambiguating words in an embedded clause are no slower in temporarily ambiguous sentences than in unambiguous control sentences (which are disambiguated by including the complementizer that). In one demonstration of the early use of verb bias in the comprehension of such sentences, Trueswell and Kim (1998) used a self-paced moving window paradigm with a "fast-priming" technique that had been previously developed by Sereno and Rayner (1992). When readers reached the position of the main verb of the sentence (e.g., accepted in example (7) below), a different verb (e.g., realized) was displayed too briefly for the participants to identify it, and was then immediately replaced with the target verb, which then remained on the screen. Trueswell and Kim used sentences like (7) below, in which the main verb has a Direct-object bias but the sentence has an embedded clause structure, and manipulated the bias of the prime verbs.

  • 7. The talented photographer accepted the fire could not have been prevented.

Since the continuation mismatches the verb’s bias in example (7), comprehenders typically have difficulty at the embedded verb (could). However, Trueswell and Kim found that if the briefly displayed prime verb was biased towards embedded clauses (e.g., realized), the difficulty at could was reduced. If the prime verb was instead biased towards direct objects (e.g., obtained), the difficulty at could was increased. The authors concluded that the bias of the prime verb was rapidly retrieved, since the prime was only displayed for 39 milliseconds, and thus is a type of information that is available early enough to influence initial stages of sentence processing.

Although several studies have found that verb bias is used as early as can be measured during sentence processing, others have not (Kennison, 2001; Pickering & Traxler, 2003). Furthermore, since all but one of the studies so far have taken the approach of using verb bias to ease comprehension when sentences turn out to have the more complex of their possible structures, it remains possible that observed verb bias effects could be due to reanalysis rather than first-stage parsing. A study by Kennison (2001), however, took the alternative approach recommended by Frazier, i.e., examining whether verb bias has any effect on the comprehension of sentences that turn out to have the simpler direct object structure. Since such sentences should never be reanalyzed, any effects of verb bias on their comprehension would have to be due to first-pass processes rather than reanalysis. Kennison found no effects of verb bias on first-pass reading times in such sentences and concluded that the effects observed in other studies are due to reanalysis. However, her conclusion is called into question because she also found no effects of verb bias in sentences with the more complex embedded clause structure. The absence of previously well-established verb bias effects in sentences with embedded clause continuations raises questions about their parallel absence in the sentences with direct object continuations.

Our results conflict with those obtained in a similar study by Kennison (2001). As in Experiment 2 here, Kennison presented participants with items that had either a direct object or an embedded clause structure, and used an eyetracker to monitor reading. In contrast to the present results, however, Kennison found no effect of verb bias in sentences containing direct objects. In fact, she found no interaction between verb bias and sentence continuation at all, because there was also no effect in sentences with clause continuations. The absence of verb bias effects in sentences with clause continuations in Kennison's study, when such effects have been replicated many times in other studies, including those reported here, raises questions about the strength of the bias of the verbs she used.

According to Kennison’s norming data, her Clause-bias verbs were followed by clauses about twice as often as by direct objects. On average, our Clause-bias verbs were about 5 times as likely to be followed by clauses as by direct objects. In addition, two of Kennison’s Clause-bias verbs would be categorized instead as Direct-object bias according to our norms, and another four as equi-biased. This might help explain why Kennison did not find verb bias effects in sentences with embedded clauses.

However, differing strengths of verb bias alone cannot explain the discrepancy between our results and Kennison's for sentences containing direct objects, because Kennison's Direct-object bias verbs were quite strongly biased (even more strongly than ours, in fact). According to Kennison's norms, her Direct-object bias verbs were 6–7 times more likely to be followed by direct objects than by clauses, while according to our norms our Direct-object bias verbs were only about 5–6 times more likely to be followed by direct objects. If we apply our norming data to Kennison's Direct-object bias verbs, 3 of them would be categorized as equi-biased, but none would be categorized as Clause-bias, and our norms confirm that her Direct-object bias verbs were indeed quite strongly Direct-object biased. Thus, while weaker Clause-bias may help explain why Kennison did not get verb bias effects in sentences with embedded clauses, the absence of verb bias effects in her sentences with direct objects cannot be similarly explained.

However, results from Hare, McRae, and Elman (2003) suggest that it may be difficult to get accurate estimates of Direct-object bias in particular, which could lead to discrepancies across studies that investigate the effects of Direct-object bias. Hare et al. (2003) reported that verb bias is influenced by verb meaning, which is influenced in turn by properties of the nouns surrounding the verbs. For instance, Hare et al. found that a noun following the verb recognize is a direct object 97% of the time when that noun is concrete, as in He recognized the celebrity. But when the noun following recognize is abstract, the verb sense changes somewhat to mean “appreciate” or “understand” (as in The author recognized (that) the problem _____), and the noun is then most likely to be the subject of an embedded clause (75% of the time). Thus, the abstractness of a post-verbal noun can change the likelihood that particular structures will follow. This suggests that the properties of (at least) post-verbal nouns should be taken into account when coding verb norming data.

One limitation of both Kennison's study and ours is that neither our estimate nor Kennison's estimate of verb bias adequately represents comprehenders' prior relevant experience with the verbs in naturally occurring sentences. Few of the verbs used by either Kennison or us happen to be ones tested by Hare et al. Since intuition is not adequate to evaluate the rest of the verbs, it is impossible to know how much impact such factors might have had on Kennison's or our results. However, bias estimation errors are more likely for Direct-object bias verbs than for Clause-bias verbs, for the following reasons. In virtually all of the experimental items used in Kennison's study and in ours, the post-verbal noun was abstract, in order to make the embedded clause continuation fit well with the verb. Such structures are particularly felicitous when the post-verbal noun that is the subject of the embedded clause is abstract. The same abstract post-verbal noun was also used in the versions of the sentences that continued with direct object structures, in order to control lexical factors that could influence reading times. An unfortunate consequence of this control, though, is that the experimental items were generally similar to the kinds of sentences that Hare et al. specifically found to promote the use of embedded clauses rather than direct objects. Thus, the abstract post-verbal nouns may have reinforced the bias of the Clause-bias verbs but counteracted the bias of the Direct-object bias verbs somewhat, leading to generally weaker verb bias effects for Direct-object bias verbs. Since the impact of weaker-than-intended Direct-object bias verbs should be seen primarily in sentences with direct object continuations, this may provide yet another reason why verb bias effects were generally weaker in sentences with direct object continuations in our study, and absent in Kennison's study. While there is no reason to expect the abstractness of the post-verbal nouns in the stimulus sentences to have counteracted Direct-object bias any differently for Kennison's sentences than for ours, some verbs are probably more affected than others, so the particular verbs we each happened to use may have been differentially susceptible to the problem.

In addition to the reading time results reported in this study, another piece of evidence for the importance of verb bias to the comprehension system is the graded effect of verb properties observed in the correlational analyses. In spite of the generally small size of the correlations for sentences with direct object continuations, the correlations were always in the predicted direction for both kinds of sentence continuations (i.e., in opposite directions for the two continuation types). For example, in Experiment 1, reading times at the disambiguation toward an embedded clause structure were slower as the verbs' Direct-object bias increased (r = .33, p < .01), while times at the disambiguation toward a direct object structure were faster as the verbs' Direct-object bias increased (r = −.21, p < .1). The opposite pattern of effects was found for Clause-bias verbs. Furthermore, correlations between verb properties and the difference between clause continuations and direct object continuations (row 3 in Table 7) were always in the expected direction, and were reliable in all but one case, which was marginal (see Table 7). Thus, although correlations between verb properties and reading times at the disambiguation toward a direct object structure were smaller than those at the disambiguation toward an embedded clause, the overall pattern illustrated in Table 7 provides clear evidence that verb bias affected reading times on both kinds of disambiguations, in sentences with a more complex embedded clause structure and in ones with a simpler direct object structure alike. This pattern is not consistent with the claim that verb bias effects are due to reanalysis.

Proponents of modular two-stage processing models have made two kinds of objections to the evidence presented in this paper that verb bias is used early. One has already been raised earlier: measurement tools do not yet exist that unambiguously separate first-pass and reanalysis processes, so such effects could be due to reanalysis. The work reported here responded by following Frazier's suggestion to look for verb bias effects in sentences that should never be reanalyzed, e.g., sentences with direct object structures. The second objection is that even incontrovertible evidence that verb bias influences first-pass processing would not rule out a modular processing system. Although early versions of the Garden-Path Model explicitly claimed that information tied to particular lexical items would not be used to guide the initial parse, the core of the modularity debate concerns whether initial parsing decisions are influenced by non-structural information. If verb bias can be thought of as a kind of structural information (although not one typically included in modular theories), then the studies reported here would not provide evidence about the timing of the influence of truly non-structural information on parsing. This argument, however, poses a conundrum for sentence comprehension theories. Given that the different meanings of lexically ambiguous verbs (which are typically considered lexical information, not structural) determine their verb bias (Hare, McRae, & Elman, 2003), an argument that verb bias is actually structural in nature would mean that non-structural information at least partly determines structural information. This in turn would make it difficult to predict in advance which types of information would be considered structural by a serial comprehension system and which would not.

Our results suggest that verb bias routinely guides English sentence interpretation from the beginning, rather than being used only in a later revision stage when reanalysis becomes necessary. Verb bias, of course, is not the only cue affecting the comprehension of sentences (e.g., Farmer, Anderson, & Spivey, 2007). Thus, despite Frazier's (1995) comment to the contrary, it is not completely accidental that previous studies have failed to reveal verb bias effects in simple syntactic structures. There is a powerful attraction to the direct object analysis, in part because direct objects are perhaps the most frequent continuation in English sentences. Any language processing mechanism capable of learning the statistical regularities of language will therefore be biased towards expecting direct objects after verbs. It is not surprising, therefore, that effects after Clause-bias verbs are, in general, weaker than effects after Direct-object bias verbs (where a multitude of cues biases the reader towards a direct object interpretation). The studies reported here have shown that a strong cue toward a clause can override the general tendency toward direct objects. The effects in sentences with direct object continuations are small, though, especially in comparison with garden-pathing in classic sentences such as The horse raced past the barn fell. This may reflect a lifetime of experience with sentences continuing as direct objects, and demonstrates that while verb bias is a powerful cue, it is certainly not the only one. The pattern of results reported here, therefore, is most consistent with language processing models in which multiple cues, verb bias among them, interact to guide the interpretation of sentences. Thus, comprehenders of sentences like (1) above can easily avoid misinterpretation, particularly if verb bias provides clear clues about how the sentence is likely to continue.

Acknowledgments

The authors would like to acknowledge NIMH National Research Service Awards MH18990 and MH19554 to the University of Illinois at Urbana-Champaign, which supported the first author at different points as a predoctoral trainee in the Training Programs in Language Processing and in Cognitive Psychophysiology, respectively. NSF SBR 98-73450 also provided partial support for the work reported here.

The authors would also like to thank George McConkie for generously providing the eyetracking facilities used in Experiment 2, as well as Gary Wolverton for invaluable aid in collecting and analyzing the eyetracking data. Finally, we would also like to thank Cindy Fisher, Gary Dell, Shelia Kennison, Neal Pearlmutter, Mike Tanenhaus, and John Trueswell for many thoughtful discussions during the course of this project.

Appendix 1 Experimental Stimuli

Experimental Items

(Note: Underlined item numbers were not included in the analysis of Experiment 1, and the bolded item was not included in the plausibility subset analysis of either study. See the text for explanation)

Clause-bias Verbs

  • 1.

    1. The ticket agent admitted (that) the mistake might not have been caught.

    2. The ticket agent admitted the mistake because she had been caught.

      Did the ticket agent make a mistake?

  • 2.

    1. The new receptionist admitted (that) the error should have been corrected sooner.

    2. The new receptionist admitted the error through a carefully worded apology.

      Was the receptionist new?

  • 3.

    1. The divorce lawyer argued (that) the issue involved facts outside the case.

    2. The divorce lawyer argued the issue despite not knowing the facts.

      Did the lawyer specialize in criminal cases?/Did the lawyer know the facts?

  • 4.

    1. The district attorney argued (that) the point might be impossible to avoid.

    2. The district attorney argued the point despite not really believing it.

      Did a judge argue the point?/Did the attorney believe the point?

  • 5.

    1. The office manager indicated (that) the problem could affect each person there.

    2. The office manager indicated the problem when he met the staff.

      Was there a problem in the office?

  • 6.

    1. The friendly clerk indicated (that) the store should refund the customer’s money.

    2. The friendly clerk indicated the store while driving through the neighborhood.

      Was the clerk rude?

  • 7.

    1. The job applicant believed (that) the interviewer considered all of her answers.

    2. The job applicant believed the interviewer after hearing about her credentials.

      Was the applicant applying for a job?

  • 8.

    1. The naive child believed (that) the fable might not really be true.

    2. The naive child believed the fable because his brother convinced him.

      Did someone tell the naive child a joke?

  • 9.

    1. The weary traveler claimed (that) the luggage would not be left alone.

    2. The weary traveler claimed the luggage when he signed the form.

      Did the traveler have luggage?/Did the traveler sign a form?

  • 10.

    1. The anonymous informant claimed (that) the reward caused him to come forward.

    2. The anonymous informant claimed the reward despite not naming the criminals.

      Was the informant well known?/Did the informant name the criminals?

  • 11.

    1. The reading instructor concluded (that) the lesson stated its point very clearly.

    2. The reading instructor concluded the lesson after making his point clearly.

      Was the instructor teaching reading?/Did the instructor make a point?

  • 12.

    1. The account executive concluded (that) the speech did not go very well.

    2. The account executive concluded the speech after a long dramatic pause.

      Did the executive give a speech?

  • 13.

    1. The robbery suspect confessed (that) the crimes took a while to plan.

    2. The robbery suspect confessed the crimes during interrogation by the police.

      Was the suspect innocent of the crimes?

  • 14.

    1. The bank guard confessed (that) the robbery began late in the evening.

    2. The bank guard confessed the robbery because he was being interrogated.

      Did the bank guard help with the robbery?/Did the bank guard confess the robbery?

  • 15.

    1. The cab driver assumed (that) the blame did not belong to him.

    2. The cab driver assumed the blame since he was protecting friends.

      Did the person drive a truck?

  • 16.

    1. The class president assumed (that) the burden caused frustration for his parents.

    2. The class president assumed the burden despite talking with his colleagues.

      Were the parents of the class president frustrated?/Did the class president assume a burden?

  • 17.

    1. The math whiz figured (that) the sum would include some state taxes.

    2. The math whiz figured the sum when he saw the numbers.

      Was the person good at math?

  • 18.

    1. The shrewd salesman figured (that) the prices could be going up soon.

    2. The shrewd salesman figured the prices when he talked to customers.

      Was the salesman concerned about prices?/Was the salesman shrewd?

  • 19.

    1. The union leader implied (that) the raise could mean more than money.

    2. The union leader implied the raise when he met with strikers.

      Did the union get a raise this year?/Did the union leader meet with management?

  • 20.

    1. The famous researcher implied (that) the secret concerned all of the scientists.

    2. The famous researcher implied the secret after he met with reporters.

      Did the researcher know a secret?

  • 21.

    1. The experienced judge decided (that) the appeal should be started right away.

    2. The experienced judge decided the appeal after hearing from both sides.

      Was the judge inexperienced?

  • 22.

    1. The famous judge decided (that) the case should never have been tried.

    2. The famous judge decided the case despite the appeals by protesters.

      Was the judge famous?

  • 23.

    1. The novice plumber realized (that) the mistake could not be fixed immediately.

    2. The novice plumber realized the mistake when his partner looked closely.

      Was the plumber an expert?

  • 24.

    1. The observant detective realized (that) the situation might soon get worse again.

    2. The observant detective realized the situation because he was a veteran.

      Was the detective observant?/Was the detective a veteran?

  • 25.

    1. The careful scientist proved (that) the theory assumed two widely disputed facts.

    2. The careful scientist proved the theory because he designed the experiment.

      Was the scientist hasty?

  • 26.

    1. The local detectives proved (that) the conspiracy covered up decades of wrongdoing.

    2. The local detectives proved the conspiracy despite having very few leads.

      Were the detectives investigating a conspiracy?

  • 27.

    1. The film director suggested (that) the scene might make the actors famous.

    2. The film director suggested the scene despite the two actors’ objections.

      Were the actors starring in a play?/Did the actors agree to the suggestion?

  • 28.

    1. The travel agent suggested (that) the vacation needed to be taken immediately.

    2. The travel agent suggested the vacation because the couple worked hard.

      Did some neighbors suggest a vacation?

  • 29.

    1. The bus driver worried (that) the passengers might complain to his manager.

    2. The bus driver worried the passengers because he drove very quickly.

      Were there passengers on the bus?/Did the bus driver drive quickly?

  • 30.

    1. The young baby-sitter worried (that) the parents found the missing liquor bottles.

    2. The young baby-sitter worried the parents because she left several messages.

      Was the baby-sitter an old woman?

  • 31.

    1. The timid woman prayed (that) the psalm might help her sick sister.

    2. The timid woman prayed the psalm because her sister was sick.

      Was the timid woman religious?

  • 32.

    1. The spiritual leader prayed (that) the blessing might make them more confident.

    2. The spiritual leader prayed the blessing because it was meal time.

      Was the leader an atheist?

  • 33.

    1. The rejected bachelor inferred (that) the reason concerned her fear of commitment.

    2. The rejected bachelor inferred the reason during one of many talks.

      Did the rejected bachelor guess the reason?

  • 34.

    1. The anxious student inferred (that) the score caused the low semester grade.

    2. The anxious student inferred the score while the professor graded papers.

      Was the student relaxed?

  • 35.

    1. The class clown pretended (that) the limp resulted from a serious injury.

    2. The class clown pretended the limp while talking to the principal.

      Was the class joker limping?

  • 36.

    1. The tired boxer pretended (that) the injury caused his humiliating title defeat.

    2. The tired boxer pretended the injury so that he could rest.

      Was the boxer exhausted?

  • 37.

    1. The teaching assistant hinted (that) the answer showed some very clear reasoning.

    2. The teaching assistant hinted the answer when the class quieted down.

      Did the assistant hint something?

  • 38.

    1. The teaching assistant hinted (that) the solution could not be found easily.

    2. The teaching assistant hinted the solution during the long difficult exam.

      Was the solution an easy one?/Was the exam easy?

  • 39.

    1. The injured woman suspected (that) the teenager created the whole dishonest story.

    2. The injured woman suspected the teenager because he looked somewhat guilty.

      Was the woman injured?/Did the teenager look guilty?

Direct-object bias Verbs

  • 40.

    1. The talented photographer accepted (that) the money might not be legally obtained.

    2. The talented photographer accepted the money because he was asked twice.

      Was the photographer offered money?

  • 41.

    1. The basketball star accepted (that) the contract included a few limiting clauses.

    2. The basketball star accepted the contract because the team paid well.

      Did the athlete play golf?

  • 42.

    1. The newspaper editor advocated (that) the raise needed to be publicly decided.

    2. The newspaper editor advocated the raise despite his fairly poor performance.

      Did the editor publish a magazine?/Did the editor perform his job well?

  • 43.

    1. The new mayor advocated (that) the strategy required more funds from industry.

    2. The new mayor advocated the strategy while meeting with the commissioners.

      Was the mayor new at the job?

  • 44.

    1. The concerned priest asserted (that) the belief might not be morally justified.

    2. The concerned priest asserted the belief during the meeting with parishioners.

      Was the priest concerned?

  • 45.

    1. The concerned congressman asserted (that) the belief might cause a public scandal.

    2. The concerned congressman asserted the belief because of pressure from voters.

      Did the mayor assert the belief?

  • 46.

    1. The CIA director confirmed (that) the rumor could mean a security leak.

    2. The CIA director confirmed the rumor when he testified before Congress.

      Did the CIA director speak about the rumor?/Did the CIA director testify before Congress?

  • 47.

    1. The Coast Guard confirmed (that) the drowning involved a famous foreign actor.

    2. The Coast Guard confirmed the drowning because the media pointedly asked.

      Did the Navy confirm the drowning?

  • 48.

    1. The scuba diver discovered (that) the wreck became hidden by the reef.

    2. The scuba diver discovered the wreck because his partner found it.

      Was the wreck hidden by a reef?/Was the wreck discovered by divers?

  • 49.

    1. The French explorers discovered (that) the treasure could cost them their lives.

    2. The French explorers discovered the treasure when the Americans found it.

      Were the explorers Scottish?

  • 50.

    1. * The angry father emphasized (that) the problems continued to worsen every year.

    2. * The angry father emphasized the problems while meeting with the principal.

      Was the father angry about the problems?/Was the father talking with the principal?

  • 51.

    1. The new boss emphasized (that) the objective should not be abandoned yet.

    2. The new boss emphasized the objective when he spoke to employees.

      Was the boss in favor of changing the objective?

  • 52.

    1. The primary suspect established (that) the alibi did not reflect the truth.

    2. The primary suspect established the alibi when he answered police questions.

      Did the alibi reflect the truth?/Did the police have a suspect in the case?

  • 53.

    1. The school board established (that) the policy could prevent cheating by students.

    2. The school board established the policy when reporters started asking questions.

      Was the policy established by the county commissioners?

  • 54.

    1. The journal editor printed (that) the article contained too much illicit material.

    2. The journal editor printed the article despite his doubts about it.

      Did the editor publish the article in a newspaper?

  • 55.

    1. The local publisher printed (that) the quote might not be completely accurate.

    2. The local publisher printed the quote against the editor’s better judgment.

      Did the publisher retract the quote?/Did the publisher print a quote?

  • 56.

    1. The new owners insured (that) the house did not flood very easily.

    2. The new owners insured the house before moving their furniture in.

      Did the owners just buy the house?

  • 57.

    1. The cautious driver insured (that) the vehicle contained an anti-theft device.

    2. The cautious driver insured the vehicle against accidents and natural disasters.

      Was the driver reckless?

  • 58.

    1. The confident engineer maintained (that) the machinery might be very easily stolen.

    2. The confident engineer maintained the machinery during his long employment here.

      Could the machinery be taken easily?/Did the engineer work at his job long?

  • 59.

    1. The devoted caretaker maintained (that) the garden should not be watered much.

    2. The devoted caretaker maintained the garden while managing the entire estate.

      Did the estate have a loyal caretaker?/Did the caretaker manage the entire estate?

  • 60.

    1. The gossipy neighbor heard (that) the story could not possibly be true.

    2. The gossipy neighbor heard the story when she asked about it.

      Did the neighbor keep secrets well?

  • 61.

    1. The two hunters heard (that) the birds began migrating in early October.

    2. The two hunters heard the birds while out in the forest.

      Were there three hunters?

  • 62.

    1. The art critic wrote (that) the interview did not go very well.

    2. The art critic wrote the interview while sitting at the computer.

      Was the critic of art writing about the interview?/Was the critic of art writing a novel?

  • 63.

    1. The popular novelist wrote (that) the essay changed the minds of many.

    2. The popular novelist wrote the essay after listening to the facts.

      Did a newspaper writer write the essay?

  • 64.

    1. The lab technician proposed (that) the idea might be worth another try.

    2. The lab technician proposed the idea despite it being extremely unpopular.

      Did the technician work in a lab?

  • 65.

    1. The city planners proposed (that) the strategy should not include land sales.

    2. The city planners proposed the strategy after a county commissioner meeting.

      Was the town council working on the strategy?

  • 66.

    1. The surgical nurses protested (that) the policy could cause problems for patients.

    2. The surgical nurses protested the policy when they met with management.

      Were the nurses against the policy?

  • 67.

    1. The political group protested (that) the treaty would allow nations to cheat.

    2. The political group protested the treaty when they marched in Washington.

      Was the political group in favor of the treaty?

  • 68.

    1. The frustrated tourists understood (that) the message meant they would be delayed.

    2. The frustrated tourists understood the message despite it being spoken softly.

      Did the tourists understand the message?

  • 69.

    1. The wise consumer understood (that) the label presented the facts very clearly.

    2. The wise consumer understood the label after reading it through twice.

      Did the foolish consumer misunderstand the label?

  • 70.

    1. The trained referees warned (that) the spectators did not respect fair play.

    2. The trained referees warned the spectators against heckling the other players.

      Were the spectators well behaved?

  • 71.

    1. The angry residents warned (that) the kids did not respect others’ property.

    2. The angry residents warned the kids when they disturbed their sleep.

      Did the kids respect others’ property?/Did the residents sleep undisturbed?

  • 72.

    1. The elderly woman forgot (that) the address changed a short time ago.

    2. The elderly woman forgot the address while driving her friend home.

      Did the elderly woman forget the address?

  • 73.

    1. The substitute forecaster forgot (that) the key looked gray on the prompter.

    2. The substitute forecaster forgot the key while giving the weather report.

      Was the regular forecaster giving the weather?

  • 74.

    1. * The foreign visitor found (that) the entrance could not be seen easily.

    2. * The foreign visitor found the entrance despite it being fairly hidden.

      Was the entrance easy to find?

  • 75.

    1. The excited tourist found (that) the statues didn’t look like their pictures.

    2. The excited tourist found the statues when he looked between buildings.

      Was the tourist viewing paintings?

  • 76.

    1. The armed gunman repeated (that) the threat should be taken very seriously.

    2. The armed gunman repeated the threat after the hostages grew quiet.

      Did the threatening man have a gun?

  • 77.

    1. The French teacher repeated (that) the poem should be finished by Friday.

    2. The French teacher repeated the poem while the students copied it.

      Was the instructor teaching a Spanish class?

  • 78.

    1. The determined students learned (that) the equations helped them to pass physics.

    2. The determined students learned the equations while studying for the exam.

      Were the students taking physics?/Were the students taking an exam?

Appendix 2 Verb Properties

Verb    Direct-object bias    Clause bias    That preference    Other
Clause-bias Verbs
admitted .11 .61 .63 .28
argued .11 .35 .89 .54
assumed .10 .90 .57 .00
believed .14 .50 .61 .36
claimed .06 .68 .58 .26
concluded .14 .81 .83 .05
confessed .20 .49 .71 .32
decided .02 .15 .69 .83
figured .07 .48 .53 .46
hinted .01 .64 .83 .35
implied .08 .90 .83 .02
indicated .27 .70 .71 .03
inferred .10 .76 .88 .14
prayed .00 .37 .80 .63
pretended .01 .25 .52 .74
proved .23 .61 .62 .16
realized .19 .80 .62 .01
suggested .21 .73 .69 .07
suspected .30 .68 .60 .02
worried .00 .24 .71 .76
Direct-object bias Verbs
accepted .97 .02 1.00 .01
advocated .87 .05 1.00 .08
asserted .66 .31 .94 .03
confirmed .74 .26 .86 .00
discovered .70 .30 .69 .00
emphasized .79 .18 .74 .03
established .94 .06 1.00 .00
forgot .37 .03 1.00 .60
found .90 .07 .86 .03
heard .76 .16 .82 .08
insured .85 .13 .86 .02
learned .60 .19 .95 .21
maintained .74 .23 .80 .03
printed .77 .01 1.00 .22
proposed .46 .16 .88 .38
protested .60 .11 .92 .29
repeated .94 .05 .60 .01
understood .91 .09 1.00 .00
warned .76 .11 .83 .13
wrote .90 .00 1.00 .10
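
The completion proportions above can be summarized computationally. The sketch below (Python) is purely illustrative and is not the authors' norming procedure: the 2:1 criterion and the handful of verbs shown are assumptions chosen only to show how proportions like those in this table map onto bias categories; the actual classification criteria are described in the main text.

```python
# Illustrative sketch only; not the norming procedure used in the paper.
# The 2:1 criterion below is an assumption for demonstration purposes.

verb_norms = {
    # verb: (direct-object completion proportion, clause completion proportion),
    # values taken from the Appendix 2 table above
    "admitted":  (0.11, 0.61),
    "realized":  (0.19, 0.80),
    "confirmed": (0.74, 0.26),
    "found":     (0.90, 0.07),
}

def classify(do_prop, clause_prop, ratio=2.0):
    """Label a verb by which completion type dominates its norms."""
    if do_prop >= ratio * clause_prop:
        return "DO-bias"
    if clause_prop >= ratio * do_prop:
        return "Clause-bias"
    return "equi-bias"

for verb, (do_p, cl_p) in verb_norms.items():
    print(f"{verb}: {classify(do_p, cl_p)}")
# admitted: Clause-bias, realized: Clause-bias, confirmed: DO-bias, found: DO-bias
```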

Appendix 3 Additional reading time measures

Table 8.

Mean self-paced reading times in Experiment 1 (msec), uncorrected for length and untrimmed

Condition | Subject NP | admitted/confirmed | (that) | Post-verb NP | Disambig. | Rest of sentence
Clause-bias verb + that + Clause | 380 | 427 | 380 | 352 | 372 | 291
Clause-bias verb + Clause | 377 | 412 | - | 388 | 385 | 298
Clause-bias verb + DO | 381 | 411 | - | 389 | 392 | 301
DO-bias verb + that + Clause | 367 | 441 | 382 | 361 | 372 | 290
DO-bias verb + Clause | 377 | 434 | - | 378 | 414 | 301
DO-bias verb + DO | 380 | 430 | - | 382 | 373 | 299

Table 9.

Mean first-pass reading times in Experiment 2 (msec), uncorrected for length and untrimmed

Condition | Subject NP | admitted/confirmed | (that) | Post-verb NP | Disambig. | Rest of sentence
Clause-bias verb + that + Clause | 752 | 316 | 245 | 361 | 383 | 619
Clause-bias verb + Clause | 757 | 317 | - | 390 | 400 | 621
Clause-bias verb + DO | 763 | 325 | - | 398 | 389 | 664
DO-bias verb + that + Clause | 760 | 324 | 256 | 363 | 370 | 670
DO-bias verb + Clause | 749 | 334 | - | 379 | 416 | 616
DO-bias verb + DO | 748 | 333 | - | 396 | 394 | 691

Table 10.

Mean regression-path reading times in Experiment 2 (msec), uncorrected for length and not trimmed at 3 SD

Condition | admitted/confirmed | (that) | Post-verb NP | Disambig.
Clause-bias verb + that + Clause | 377 | 280 | 466 | 448
Clause-bias verb + Clause | 389 | - | 481 | 488
Clause-bias verb + DO | 383 | - | 485 | 524
DO-bias verb + that + Clause | 386 | 288 | 488 | 465
DO-bias verb + Clause | 382 | - | 471 | 561
DO-bias verb + DO | 391 | - | 465 | 481

Table 11.

Mean total reading times in Experiment 2 (msec), uncorrected for length and not trimmed at 3 SD

Condition | Subject NP | admitted/confirmed | (that) | Post-verb NP | Disambig. | Rest of sentence
Clause-bias verb + that + Clause | 981 | 425 | 222 | 526 | 616 | 932
Clause-bias verb + Clause | 991 | 467 | - | 556 | 624 | 868
Clause-bias verb + DO | 980 | 495 | - | 580 | 575 | 927
DO-bias verb + that + Clause | 986 | 447 | 265 | 547 | 562 | 910
DO-bias verb + Clause | 1001 | 539 | - | 647 | 730 | 994
DO-bias verb + DO | 957 | 470 | - | 539 | 571 | 925

Table 12.

Mean residual second-pass reading times in Experiment 2 (msec), not trimmed at 3 SD

Condition | Subject NP | admitted/confirmed | (that) | Post-verb NP | Disambig. | Rest of sentence
Clause-bias verb + that + Clause | −90 | −37 | −18 | −43 | 32 | 87
Clause-bias verb + Clause | −86 | −9 | - | −32 | 21 | 74
Clause-bias verb + DO | −83 | 20 | - | −16 | −14 | 42
DO-bias verb + that + Clause | −86 | −10 | −13 | −22 | −1 | 73
DO-bias verb + Clause | −63 | 63 | - | 86 | 95 | 146
DO-bias verb + DO | −85 | 0 | - | −37 | −25 | 41
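
Several of the tables above report reading times "uncorrected for length," and Table 12 reports residual reading times. As a rough sketch of what length correction of this general kind involves (the exact procedure used in these experiments is described in the main text, and the numbers below are hypothetical rather than taken from the data), residual reading times can be obtained by regressing raw reading times on region length and keeping the residuals:

```python
# Generic sketch of length-corrected (residual) reading times; not the
# authors' analysis code, and the input values below are made up.
import numpy as np

def residual_reading_times(rts_msec, lengths_chars):
    """Remove the linear effect of region length from raw reading times."""
    rts = np.asarray(rts_msec, dtype=float)
    lengths = np.asarray(lengths_chars, dtype=float)
    slope, intercept = np.polyfit(lengths, rts, 1)   # fit RT = slope*length + intercept
    predicted = slope * lengths + intercept
    return rts - predicted   # positive values = slower than expected for that length

# Hypothetical raw region times (msec) and region lengths (characters) for one participant:
raw = [380, 427, 352, 372, 291, 414]
lens = [22, 9, 12, 14, 20, 13]
print(np.round(residual_reading_times(raw, lens)))
```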

Description of analysis of variance results for the first word of the disambiguating region

1. Experiment 1

The first disambiguating word in sentences with Clause continuations was read 26 msec faster after Clause-bias verbs than after DO-bias verbs, exceeding or equaling the 95% confidence interval on the difference of 21 msec by participants and 26 msec by items. There was also a small difference in the opposite direction in sentences with DO continuations, whose first disambiguating word was read 8 msec slower after Clause-bias verbs than after DO-bias verbs, but this difference did not exceed the 95% confidence interval by participants or items. The pattern led to an interaction between verb bias and continuation type that was reliable by participants but marginal by items (F1(1,53) = 4.1, p < .05; F2(1,74) = 3.3, p < .1; MinF′(1,119) = 1.8, p > .1). These results are similar to results reported in the paper for the two-word disambiguating region.
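
For reference, MinF′ statistics like the one just reported combine the by-participants and by-items F ratios. The sketch below is a generic illustration of the standard formula, not the authors' analysis code; it recovers the reported MinF′ value from the rounded F values above, and because those F values are rounded, the denominator degrees of freedom come out only approximately.

```python
# Generic computation of minF' from by-participants (F1) and by-items (F2)
# F ratios with one numerator degree of freedom; not the authors' code.

def min_f_prime(f1, f1_error_df, f2, f2_error_df):
    """Return (minF', denominator df) for the combined test."""
    value = (f1 * f2) / (f1 + f2)
    df = (f1 + f2) ** 2 / (f1 ** 2 / f2_error_df + f2 ** 2 / f1_error_df)
    return value, df

# Rounded values reported above for Experiment 1:
# F1(1,53) = 4.1 and F2(1,74) = 3.3 give a minF' of about 1.8.
value, df = min_f_prime(4.1, 53, 3.3, 74)
print(round(value, 1), round(df))  # 1.8; the df estimate (~127 here) differs from
                                   # the reported 119 because the F values are rounded
```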

2. Experiment 2

The pattern of results was similar when uncorrected first-pass times on just the first word of the disambiguating region were analyzed. Those times were 20 msec longer by participants when a Clause continuation followed a DO-bias verb than when it followed a Clause-bias verb, exceeding the 16-msec 95% confidence interval on the difference by participants, and 24 msec longer by items, exceeding the 23-msec 95% confidence interval by items. There was a 9-msec difference in the opposite direction in sentences with DO continuations, which did not exceed the 95% confidence interval on the difference by either participants or items. Just as in the analysis of the two-word region, this pattern led to a reliable interaction between verb bias and sentence continuation (F1(1,74) = 6.8, p < .05; F2(1,76) = 4.9, p < .05; MinF′(1,145) = 2.8, p < .1). These results are similar to the results reported in the paper for the two-word disambiguating region.

For regression-path times, the interaction between verb bias and sentence continuation type was reliable (F1(1,74) = 8.2, p < .01; F2(1,76) = 7.3, p < .01; MinF′(1,149) = 3.9, p < .05). Both pairwise differences were also reliable: readers spent 51 msec longer on Clause continuations following DO-bias verbs than following Clause-bias verbs (exceeding the 95% confidence intervals of 41 msec by participants and 44 msec by items), and 32 msec longer on DO continuations following Clause-bias verbs than following DO-bias verbs (exceeding the 31-msec 95% confidence interval on the difference by participants, but not the 42-msec interval by items). These results are similar to the results reported in the paper for the two-word disambiguating region.
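
The confidence-interval comparisons used throughout these descriptions treat a pairwise difference as reliable when it exceeds the half-width of the 95% confidence interval on that difference. The sketch below only illustrates that arithmetic; the standard error is a hypothetical value chosen so that the half-width matches the 41-msec by-participants interval quoted above, since the underlying standard errors are not reported here.

```python
# Illustrative arithmetic for the confidence-interval criterion; the standard
# error below is hypothetical and chosen only to reproduce a 41-msec half-width.
from scipy import stats

def ci_half_width(se_diff, df, level=0.95):
    """Half-width of a confidence interval on a mean difference."""
    return stats.t.ppf(0.5 + level / 2, df) * se_diff

half = ci_half_width(se_diff=20.6, df=74)   # hypothetical SE, 74 participant df
print(round(half))     # 41
print(51 > half)       # True: the 51-msec Clause-continuation difference exceeds it
```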

Results for the first word of the disambiguating region were not computed for total reading times, since the first-word analysis is designed to tap only the earliest stages of processing (see text).

Description of analysis of variance results for the subset of items in which plausibility was equated

1. Experiment 1

Clause continuations were read 24 msec faster after Clause-bias verbs than after DO-bias verbs, exceeding the 95% confidence interval on the difference of 15 msec by participants and 19 msec by items. There was also a 17-msec difference in the opposite direction in sentences with DO continuations, exceeding the 95% confidence intervals of 16 msec by both participants and items. These results are similar to results reported in the paper for the two-word disambiguating region.

2. Experiment 2

In the analysis of length-corrected first-pass times on the two-word disambiguating region for just the subset of 56 items that equated rated plausibility across verb type, the pattern of first-pass results differs somewhat from the pattern reported for all items in the paper. As in the full set of items, Clause continuations following DO-bias verbs were read 23 msec slower by participants than Clause continuations following Clause-bias verbs, exceeding the 22-msec 95% confidence interval on the difference, and 25 msec slower by items, equaling the 25-msec 95% confidence interval on the difference. However, there was essentially no difference in the opposite direction in sentences with DO continuations (just 1 msec by participants and 2 msec by items). The fact that these effects are weaker after approximately 28% of the items are removed suggests a lack of power in this post-hoc analysis (see text), which is especially apparent in the condition that showed weaker effects to begin with.

For regression-path times, the interaction between verb bias and sentence continuation was reliable only by participants (F1(1,74) = 6.1, p < .05; F2(1,76) = 1.8, p > .1; MinF′(1,111) = 1.4, p > .1). The pairwise comparisons showed that regression-path times were 57 msec slower when a Clause continuation followed a DO-bias verb, exceeding the 95% confidence intervals of 33 msec by participants and 42 msec by items, but there was only a smaller 12-msec difference in the opposite direction when a DO continuation followed a Clause-bias verb, which did not exceed the 95% confidence intervals of 32 msec by participants and 69 msec by items. This again suggests a lack of power in this post-hoc analysis.

Results for the first word of the disambiguating region were not computed for total reading times, since that analysis is designed to tap only the earliest stages of processing (see text).


