Abstract
In this study, we examined how function and content words are read in intra- and interlingual subtitles. We monitored the eye movements of a group of 39 deaf, 27 hard of hearing, and 56 hearing Polish participants while they viewed English and Polish videos with Polish subtitles. We found that function words and short content words received less visual attention than longer content words, which was reflected in shorter dwell time, a lower number of fixations, shorter first fixation duration, and lower subject hit count. Deaf participants dwelled significantly longer on function words than the other participants, which may indicate difficulty in processing this type of word. The findings are discussed in the context of classical reading research and applied research on subtitling.
Although audiovisual media are now part and parcel of modern life, and solutions that make media more accessible to people who are deaf or hard of hearing (D/HH) are more widespread than ever before, research on how these media are perceived and processed by viewers still lags behind the vast body of high-quality studies available on reading printed text (for an overview, see Rayner, 1998).
The reading of subtitles¹ is a process markedly different from reading printed text, as it requires parallel processing of several sources of information which are beyond the reader’s control: the on-screen action, the soundtrack, and the subtitles (for more on the subtitle reading process, see Kruger, Szarkowska, & Krejtz, 2015). In printed text, readers are free to decide on the pace of reading and can freely return with their gaze to re-read something. When reading subtitles, however, people have no option to backtrack and need to accommodate their reading styles to “fleeting text on a dynamic background” (Kruger & Steyn, 2013, p. 105), having no control over the presentation speed of subtitles (for more on subtitle presentation rates, see Szarkowska, Krejtz, Pilipczuk, Dutka, & Kruger, forthcoming).
The central goal of this study is to examine how content and function words in Polish are read in intralingual (Polish to Polish) and interlingual (English to Polish) subtitles. First, by monitoring the eye movements of three groups of adults (hearing, hard of hearing, and deaf) while they watched subtitled videos, we aimed to establish whether the three groups of participants differ in their style of reading function and content words. Second, we wanted to find parallels between the process of reading content and function words in printed text and in subtitles. To achieve these two objectives, we analyzed the comprehension of videos with intralingual and interlingual subtitles, and the eye movement characteristics calculated for function versus content words: subject hit count (i.e., the percentage of subjects who looked at a specific word), first fixation duration (FFD), fixation count, and dwell time. To the best of our knowledge, no previous eye tracking studies have been conducted on subtitled videos to test differences in how content and function words are read in subtitles by deaf, hard of hearing, and hearing people. A better understanding of how function and content words in intra- and interlingual subtitles are processed may contribute to the educational potential of learning languages through reading subtitles.
Function vs. Content Words
Words in language can traditionally be divided into two major groups on the basis of their grammatical behavior and main functions: content words (also known as lexical words) and function words (sometimes also referred to as grammatical words) (Biber, Johansson, Leech, Conrad, & Finegan, 1999; Rayner, Pollatsek, Ashby, & Clifton, 2012). Content words constitute an open class, that is, their number in the language is not fixed, and new lexical items constantly enrich the lexicon. In spoken language, content words are usually stressed. Although content words may comprise one morpheme only, they tend to have a complex internal structure, which is a result of processes such as inflection, derivation, and compounding. Consequently, content words are normally longer than function words. This group of words includes nouns, lexical verbs, adjectives, and adverbs (Biber et al., 1999; Krzeszowski, 1994).
As opposed to content words, function words form a closed class, resistant to the inclusion of new members. Function words constitute a substantially less numerous group of lexemes than content words; yet, they appear extraordinarily often. Statistically, a 190-word text in English contains as many as 90 function words (Krzeszowski, 1994). Function words usually have a simple structure and consist of a single morpheme only; in consequence, they are mostly very short and tend to be more abstract. In spoken language, they are usually unstressed.
Even though the notion of content and function words is by and large the same in English and in Polish (“while lexical words are the main building blocks of texts, function words provide the mortar which binds the text together”; Biber et al., 1999, p. 55), the two languages differ in terms of internal structure and formal classifications. As noted by Krzeszowski (1994), the use of function words is considerably wider in English than in Polish. Function words in English include parts of speech such as determiners, pronouns, primary auxiliaries, modal auxiliaries, prepositions, adverbial particles, coordinating and subordinating conjunctions, wh-words, existential there, the negator not, the infinitive marker to, and numerals (both ordinal and cardinal) (Biber et al., 1999). Unlike in English, auxiliaries, pronouns, and numerals are classified in Polish as content words. As a result, the category of function words is less numerous in Polish than in English; some Polish function words may also be longer than one syllable. In this study, we control for the length of content words, assuming that the difference between short content words and function words may not be as pronounced in the Polish language.
Reading Content and Function Words
When reading, people make two major types of eye movements: saccades and fixations. Saccades are rapid eye movements during which the gaze moves from one area to another. Fixations are the periods between two consecutive saccades when the eyes are relatively still and new information can be taken in. When reading printed text, our eyes usually stop to make fixations of about 200–300 ms; when exploring a scene, however, fixations can last from under 100 to over 500 ms (Rayner, 1998). Fixation duration and the number of fixations on words (also known as fixation count) are treated as indices of information processing difficulty and of attentional focus: longer fixations and more fixations on a word indicate higher processing difficulty.
A number of eye tracking studies on reading printed text have demonstrated that not all words are fixated equally: our eyes do not stop on every single word in the text, fixating some words but skipping others. The probability of fixating a word depends on its length (longer words tend to be fixated more often), its frequency in the language (high-frequency words are skipped more often than low-frequency words), its predictability in context (highly predictable words are more likely to be skipped), as well as on the reader’s familiarity with the word (see Brysbaert & Vitu, 1998; Rayner & McConkie, 1976; Rayner et al., 2012; Schotter & Rayner, 2012). Nonfixated words are often function words. Even when function words are fixated, however, people look at them less, making fewer fixations than on content words (Roussel, Rohr, Raufaste, & Nespoulous, forthcoming; see also Rayner & Duffy, 1988; Rayner & McConkie, 1976).
The process of reading content and function words in English was examined by Carpenter and Just (1983) and Just and Carpenter (1987). Their first experiment with hearing readers (Carpenter & Just, 1983) showed that readers sampled the text very densely and skipped relatively few words, most of which were function words like “the” and “and.” At the same time, almost all content words were fixated at least once. When three-letter content words were compared to function words of the same length, content words were more likely to be fixated (57% vs. 40%, respectively). All in all, readers fixated 83% of the content words but only 38% of the function words.
Another study on content and function words compared three types of reading: normal reading, speed reading, and skimming (Just & Carpenter, 1987). Normal readers were found to sample the text very densely (they fixated 64% of the words), unlike speed readers (33% of the words) and skimmers (40% of the words). Normal readers tended to skip short function words, whereas rapid readers skipped several words at once. Normal readers fixated 77% of the content words and nearly half as many function words (Just & Carpenter, 1987). When three-letter function and content words were compared, function words were still less likely to be fixated than content words.
In a more recent experiment, Roussel et al. (forthcoming) examined whether function words receive less visual attention than content words (in terms of the number and duration of fixations) and whether grammatical class affects readers’ eye movements regardless of word length, frequency, and word-form. The readers in their study fixated 61.4% of content words and 47.9% of function words. The authors also found that grammatical class affected fixation numbers independently of frequency and word length: there were fewer fixations on function words than on content words, even when frequency and word-form were controlled.
All the above shows important differences between how hearing people read function and content words in printed text. In this study, we wanted to see whether similar differences are present in the reading of subtitles among hearing, deaf, and hard of hearing participants. Before presenting our results, we first look into the impact of deafness on reading, particularly reading function words.
Deafness and Reading
Deafness can be an important predictor of lower language proficiency and lower efficiency of text processing, especially word decoding, which may result in lower educational achievement (Gonter Gaustad, 2000; Moeller, Tomblin, Yoshinaga-Itano, Connor, & Jerger, 2007; Traxler, 2000; Trybus & Karchmer, 1977; Waters & Doehring, 1990; Wauters, van Bon, & Tellings, 2006). Musselman (2000, p. 9) calls language delay “the hallmark of deafness,” and Mayberry, del Giudice, and Lieberman (2011) argue that “the median reading level of deaf students indicates subpar achievement” (p. 164). Nelson and Camarata (1996) state that only a fraction of children with profound and severe prelingual hearing loss will be able to acquire high literacy skills, regardless of the country in which they are raised. A number of studies have shown that deaf students tend to be less proficient in reading and writing a spoken language than their hearing peers (Albertini & Mayer, 2011; Antia, Jones, Reed, & Kreimeyer, 2009; Karchmer & Mitchell, 2003; Marschark, 1993; Marschark, Lang, & Albertini, 2002; Qi & Mitchell, 2012; Schirmer & McGough, 2005). One of the features that distinguish deaf people from the hearing in terms of language proficiency is their (mis)use of function words. It has been found that in English, deaf people may have difficulties with articles: they tend to overuse the definite article and avoid the indefinite article (Channon & Sayers, 2007; Wolbers, Dostal, & Bowers, 2012). They also have problems with demonstratives, determiners, and dependent clause markers, and they demonstrate low mastery of third-person pronouns (Channon & Sayers, 2007).
Function words may be problematic for deaf people as they are often polysemous: their meaning is highly context dependent and they have “low fixed semantic content outside of specific context in which they occur” (Channon & Sayers, 2007, p. 92). Deaf and hard of hearing (DHH) children may find it more difficult to learn function words than their hearing counterparts (Trezek, Wang, & Paul, 2010). Given that function words are usually short and unstressed and that they tend to be contracted in speech (I have done → I’ve done), they are more difficult to identify in speech and to lip-read than content words. The fact that many function words have no equivalents in sign languages makes them even more difficult for deaf people to master.
Some of the reasons for the difficulties that many deaf people may have with acquiring oral language can be attributed to differences in the way they acquire language. Whereas hearing children acquire language through natural, meaningful daily interaction with proficient language users in the process of growing up, deaf children do not naturally acquire oral language through incidental learning processes, such as overhearing everyday conversation (Singleton, Morgan, DiGello, Wiles, & Rivers, 2004). In consequence, they have fewer word-learning opportunities (Coppens, Tellings, Schreuder, & Verhoeven, 2013; Fagan & Pisoni, 2010) and limited access to language via the acoustic input (Wolbers et al., 2012).
Reading skills are positively correlated with phonological awareness (Dillon, de Jong, & Pisoni, 2012). Thanks to cochlear implants, deaf people can gain access to the auditory signal, including speech, which allows them to develop phonological representations and phonological awareness (Dillon et al., 2012). Cochlear implants have been found to improve the perception and production of speech (López-Higes, Gallego, Martín-Aragoneses, & Melle, 2015) and to foster vocabulary acquisition (Connor, Craig, Raudenbush, Heavner, & Zwolan, 2006), which together have a positive impact on reading skills. Cochlear implants also give children with hearing loss greater exposure to spoken language (Hayes, Kessler, & Treiman, 2011).
We believe that subtitled videos can be yet another form of exposure to linguistic input in both written and oral form which can help children who are D/HH acquire spoken language. In this study, we wanted to verify whether deaf people differed from other participants in their processing of function words in subtitles.
Rationale for the Study/Watching Subtitled Videos
Previous eye tracking studies on reading and processing subtitles demonstrated that the appearance of subtitles changes the way people watch films: the film viewing process becomes a reading process (Jensema, Sharkawy, Danturthi, Burch, & Hsu, 2000). Looking at subtitles at their onset is, as argued by d’Ydewalle and de Bruycker (2007, p. 196), “more or less obligatory.” It has also been found that people gaze at subtitles regardless of their knowledge of the language of the audio track (d’Ydewalle & de Bruycker, 2007; d’Ydewalle, Praet, Verfaillie, & van Rensbergen, 1991), their familiarity with subtitles (d’Ydewalle et al., 1991), or the availability of the soundtrack (d’Ydewalle, van Rensbergen, & Pollet, 1987; van Lommel, Laenen, & d’Ydewalle, 2006). This prompted d’Ydewalle et al. (1991) to famously state that reading subtitles is an automatic behavior.
The subtitle reading process has also been found to depend largely on viewers’ literacy skills (Burnham et al., 2008). Children read differently from adults (Baker, 1985; Cambra, Leal, & Silvestre, 2013; d’Ydewalle & de Bruycker, 2007). Similarly, hearing status is an important indicator of literacy (Burnham et al., 2008): hearing viewers tend to outperform those with hearing loss in reading tests, including subtitle reading (Easterbrooks & Stephenson, 2006; Kelly, 2003; Stewart, 1984).
In eye tracking research, higher reading proficiency is indicated by shorter dwell time, lower fixation count, and shorter fixation duration (Rayner, 1998). In some previous eye tracking studies on reading subtitles, DHH viewers were found to differ from hearing viewers in their eye movement patterns when they were watching subtitled videos, as manifested by longer fixation durations, higher number of fixations, and longer dwell time in the subtitle area (Szarkowska, Krejtz, Kłyszejko, & Wieczorek, 2011).
In this study, we examined how function and content words were read in subtitles by three groups of subjects (deaf, hard of hearing, and hearing) who watched short clips with either intralingual or interlingual subtitles. Our first prediction was that interlingual English-to-Polish subtitles would be more difficult to process than intralingual Polish-to-Polish subtitles, as DHH participants would not be able to take advantage of lip reading. Because the reading of subtitled text is more cognitively complex than that of printed text, given that apart from reading the subtitles viewers also need to follow the action on the screen, we expected that the number of words skipped in reading subtitles would be higher than the number of words skipped in reading printed text, as reported in previous studies (e.g., Just & Carpenter, 1987). By monitoring the eye movements of the three groups of subjects, we wanted to find out whether DHH people would exhibit different patterns of reading function and content words in subtitles compared to hearing people, and whether this would be reflected in their comprehension scores. Finally, given the vast body of previous studies, we hypothesized that the DHH people would have lower comprehension scores than the hearing, and that the time they spent reading subtitles (dwell time, fixation count) would be longer.
To recap, our research was guided by two main research questions. First, we wanted to find how content and function words were read in subtitles in order to identify parallels between the process of reading subtitles and printed text. Second, we investigated if there were differences in eye gaze patterns and comprehension accuracy of deaf, hard of hearing, and hearing adults while watching videos and reading subtitles.
Method
Participants
A group of 144 volunteers took part in the experiment: 47 deaf, 35 hard of hearing, and 62 hearing (33%, 24%, and 43%, respectively). Among them, 96 were female and 48 male (67% and 33%, respectively). Due to data collection issues (e.g., poor calibration quality), some data had to be discarded, and the total number of participants analyzed in this study was 122: 39 deaf (19 female), 27 hard of hearing (16 female), and 56 hearing (44 female). The ages of the subjects ranged from 15 to 70, and there were no significant group differences in age, F(2, 119) = 2.32, ns (M deaf = 23.62, SD = 12.00; M hard of hearing = 27.26, SD = 14.57; M hearing = 26.92, SD = 12.26).
Participants were asked to self-report their hearing status by declaring themselves as either d/Deaf, hard of hearing, or hearing. Given that hearing status may be perceived as an important part of the identity of many people who are deaf or have a hearing loss, some people who are medically classified as hard of hearing may feel they belong to the deaf community, and vice versa. It is for this reason that we also asked the participants to state their degree of hearing loss in decibels. As illustrated in Table 1, the vast majority of the participants were born with hearing loss (77% of the deaf and 52% of the hard of hearing). Others lost hearing at the perilingual stage, that is, up to the age of 5 (13% deaf and 22% hard of hearing), or at the postlingual stage, that is, after acquiring an oral language (10% deaf and 26% hard of hearing). This means that for most participants the language of the subtitles in the study, that is, Polish, was a foreign language they possibly acquired after acquiring Polish Sign Language. Only in the case of a few participants was the hearing loss related to age. About 63% of deaf participants and 89% of hard of hearing participants had hearing aids, which they used during the study.
Table 1.
Descriptive statistics for deaf and hard-of-hearing groups
|  | Deaf (%) | Hard of hearing (%) |
| --- | --- | --- |
| Degree of hearing loss |  |  |
| Mild (20–40 dB) | 5 | 4 |
| Moderate (41–70 dB) | 28 | 30 |
| Severe (71–90 dB) | 10 | 26 |
| Profound (>90 dB) | 49 | 33 |
| N/A | 8 | 7 |
| Onset of hearing loss |  |  |
| Prelingual | 77 | 52 |
| Perilingual | 13 | 22 |
| Postlingual | 10 | 26 |
| Hearing aid | 63 | 89 |
Most deaf participants (95%) and nearly half of the hard of hearing attended special education schools. Participants were also asked to declare the language of communication they most frequently use in their everyday lives. Most deaf participants (71%) communicated in sign language, whereas hard of hearing participants were mostly Polish language users (67%). Because some of the videos in the study were originally in English and presented to the participants with Polish subtitles, we asked the participants to self-report their proficiency in English on a 10-point scale, where 1 means “no knowledge at all” and 10 means “I am proficient.” Hearing participants declared a significantly higher level of English language proficiency than the other two groups, F(2, 118) = 32.80, p < .001 (see Table 2).
Table 2.
Participants’ educational history
|  | Deaf (%) | Hard of hearing (%) | Hearing (%) |
| --- | --- | --- | --- |
| School attended |  |  |  |
| No information | 2 | 0 | 0 |
| Mass school | 0 | 52 | 98 |
| Integrated school | 3 | 4 | 2 |
| Deaf school | 95 | 44 | 0 |
| Language of everyday communication |  |  |  |
| Polish | 26 | 67 | 100 |
| Sign language | 71 | 33 | 0 |
| Other | 3 | 0 | 0 |
| Mean proficiency in English (SD)a | 4.32 (2.32) | 4.5 (2.62) | 7.96 (2.4)*** |
Note. aProficiency in English was measured on a scale from 1 (no knowledge at all) to 10 (I am proficient).
***p < .001.
Stimuli
Each participant viewed 12 videos, each lasting 1–2 min. In this paper, we report on the analysis of six selected videos (two feature films, two documentaries, and two news programs). Subtitles were prepared using the EZTitles subtitling software and were presented at a rate of either 12 or 15 characters per second (cps), depending on the clip.
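To make the presentation rate parameter concrete, here is a minimal sketch (ours, not the EZTitles workflow) of how a subtitle’s rate in cps can be computed, assuming the common convention that spaces count toward the character total.

```python
# Illustrative cps computation (an assumption, not the authors' EZTitles workflow).
def presentation_rate_cps(text: str, in_time_s: float, out_time_s: float) -> float:
    """Characters per second for a subtitle shown from in_time_s to out_time_s."""
    duration = out_time_s - in_time_s
    if duration <= 0:
        raise ValueError("subtitle must have a positive display duration")
    return len(text) / duration  # spaces count toward the total, one common convention

# A 36-character line displayed for 3 s reads at 12 cps,
# the slower of the two rates used in the study.
print(presentation_rate_cps("To jest przykładowy napis na ekranie", 10.0, 13.0))  # 12.0
```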
The language of the videos was either English (N = 3), in which case participants were presented with interlingual English-to-Polish subtitles, or Polish (N = 3), in which case the subtitles were intralingual (Polish to Polish). The choice of both intra- and interlingual subtitles was meant to reflect the current situation on the Polish audiovisual market, where hearing, deaf, and hard of hearing viewers alike are presented with these two types of subtitles.
Procedure
Participants were tested individually. All instructions were provided in written Polish and were displayed on a monitor. No communication problems in understanding the task were noted. First, participants were asked to sign a written consent form to take part in the study. They were instructed to watch the videos carefully as they would have to answer comprehension questions. The test began with a few questions eliciting demographic data.
After viewing each video, participants had to answer three multiple-choice questions testing their comprehension. The questions related only to information found in the subtitles and could not be answered from the image alone. They were displayed on the monitor (using the questionnaire functionality of SMI Experiment Center); the subjects had to choose an answer (a, b, c, or d) by clicking on it. All questions were presented in written Polish.
Finally, all participants received promotion kits from the University of Warsaw as an incentive for their participation in the study. An experimental session with one participant lasted 40–50 min, depending on the time a participant took to answer the questions. All authors of the present paper actively participated in conducting the study and in the data analyses.
Apparatus and Eye Tracking Measures
The participants’ eye movements were recorded with an SMI RED eye tracking system with a sampling rate of 120 Hz. The system uses an infrared camera placed below the monitor and records gaze position 120 times per second. Participants were seated in a chair in front of a 22-inch monitor at a distance of about 60 cm (see Figure 1). Nine-point calibration and validation were performed. To ensure high data quality, an average deviation of 1° was the maximum value accepted during calibration; in the case of higher values, calibration was repeated. The eye tracker manufacturer’s software, Experiment Center and BeGaze, was used with default settings. Raw data were exported to MS Excel for further statistical processing. For statistical analysis and data preparation, IBM SPSS Statistics 22 was used.
Figure 1.
Experimental setting.
In eye tracking research on reading, it is common practice to draw areas of interest (AOIs) separately for each word. In our study, we drew AOIs around each word in each of the subtitles included in the analysis, as shown in Figure 2. The AOIs enabled us to calculate reading indices for each word, which allowed us to identify specific reading strategies for each word category. Each word was classified as either a content word or a function word. All content words were further grouped by the number of syllables, from 1-syllable words to words of 5 or more syllables. For the purpose of the study, abbreviations (e.g., mln [million], r. [year]) and numerals (e.g., 3, 1980) were excluded from the analysis (a sketch of this classification follows Figure 2).
Figure 2.
Sample areas of interest on individual words in subtitles. Note. f. = function word; n. = number; a. = abbreviation; 2-syl. = 2-syllable content word.
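As a rough illustration of this classification step, the sketch below tags each (pre-cleaned) subtitle word as a function word or an n-syllable content word. The function word list and the vowel-counting heuristic for Polish syllables are our simplifying assumptions for the example, not the study’s actual resources.

```python
import re

# Illustrative subset of Polish function words (not the study's actual list).
FUNCTION_WORDS = {"i", "w", "z", "na", "do", "się", "że", "nie", "o", "to", "po", "a"}

def count_syllables_pl(word: str) -> int:
    """Approximate syllable count by counting vowel letters; a reasonable proxy
    for Polish, although 'i' before a vowel often only marks palatalization."""
    return max(1, len(re.findall(r"[aeiouyąęó]", word.lower())))

def classify_word(token: str) -> str:
    """Tag a word as a function word or an n-syllable content word,
    with words of 5 or more syllables sharing one bin (as in the study)."""
    if token.lower() in FUNCTION_WORDS:
        return "function"
    return f"{min(count_syllables_pl(token), 5)}-syl. content"

print([classify_word(w) for w in ["to", "jest", "przykładowy", "napis"]])
# ['function', '1-syl. content', '4-syl. content', '2-syl. content']
```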
The following eye tracking measures, which are typically used as indicators of reading behavior and processing difficulty, were calculated for each word: subject hit count, FFD, fixation count, and dwell time. Subject hit count is the percentage of subjects who looked at a specific word. FFD is the duration of the first fixation on a word. Fixation count is the total number of fixations a participant made while reading a word. Dwell time is the total time a participant spent reading a word (i.e., the sum of the durations of all fixations and saccades within the word’s AOI). A sketch of how these measures can be derived from raw fixation data follows below.
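The sketch below illustrates, under assumed data structures rather than the authors’ actual BeGaze export format, how these four measures can be computed from a per-fixation log.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation that landed in a word's AOI,
# with rows assumed to be in chronological order.
fix = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p2"],
    "word_id":     ["w1", "w1", "w1", "w2", "w2"],
    "duration_ms": [180, 220, 240, 200, 260],
})
n_participants = 2  # full sample size, including participants who skipped a word

per_word = fix.groupby(["word_id", "participant"]).agg(
    fixation_count=("duration_ms", "size"),            # number of fixations on the word
    first_fixation_duration=("duration_ms", "first"),  # FFD
    dwell_time=("duration_ms", "sum"),                 # saccade time omitted for brevity
)

# Subject hit count: percentage of participants with at least one fixation on the word.
hit_count = fix.groupby("word_id")["participant"].nunique() / n_participants * 100

print(per_word)
print(hit_count)  # w1: 100.0, w2: 50.0
```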
Analyses were conducted using the analysis of variance (ANOVA) and analysis of covariance (ANCOVA). The results are reported with Greenhouse–Geisser correction and Bonferroni post hoc comparisons whenever necessary.
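The analyses themselves were run in SPSS; as a hedged open-source equivalent, the sketch below shows how a comparable mixed-design ANOVA with Greenhouse–Geisser correction and Bonferroni post hocs could be specified with the pingouin library, for a simplified viewers × word type design (pingouin’s mixed_anova handles one between- and one within-subject factor). The file and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg  # hypothetical re-analysis; the study itself used SPSS

# Assumed long format: one row per participant x word type.
df = pd.read_csv("dwell_time_long.csv")  # columns: participant, group, word_type, dwell_time

# Mixed ANOVA; correction=True applies the Greenhouse-Geisser correction
# to the within-subject (word type) effect.
aov = pg.mixed_anova(data=df, dv="dwell_time", within="word_type",
                     subject="participant", between="group", correction=True)
print(aov)

# Bonferroni-adjusted post hoc comparisons, as reported in the Results.
posthoc = pg.pairwise_tests(data=df, dv="dwell_time", within="word_type",
                            subject="participant", between="group", padjust="bonf")
print(posthoc)
```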
Results
Comprehension
Comprehension accuracy for intralingual and interlingual subtitles was subjected to a 3×2 mixed design ANCOVA, with viewers (deaf, hard of hearing, and hearing) as a between-subject factor and subtitle type (intralingual vs. interlingual) as a within-subject factor. We included two continuous variables, age and English language proficiency, as covariates to control for (or partial out) their influence on the dependent variable (Field, 2005). Comprehension accuracy was calculated by taking the average of correct answers to all questions, separately for videos with interlingual and intralingual subtitles, as sketched below.
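A minimal sketch of this scoring step, assuming a long-format answer log with hypothetical column names:

```python
import pandas as pd

answers = pd.DataFrame({
    "participant":   ["p1"] * 6,
    "subtitle_type": ["intralingual"] * 3 + ["interlingual"] * 3,
    "correct":       [1, 1, 0, 1, 0, 0],  # 1 = question answered correctly
})

# Percentage of correct answers per participant, separately per subtitle type.
accuracy = answers.groupby(["participant", "subtitle_type"])["correct"].mean() * 100
print(accuracy)  # p1: interlingual 33.3, intralingual 66.7
```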
Effects involving the two covariates, English proficiency and age, turned out to be nonsignificant. However, the analysis revealed two main effects (see Figure 3).
Figure 3.
Average comprehension depending on subtitles type and group.
As expected, the main effect of subtitle type showed that comprehension of intralingual subtitles was significantly higher (M = 71.04, SE = 1.32) than that of interlingual subtitles (M = 65.06, SE = 1.60), F(1, 116) = 4.35, p < .05, ηp² = 0.036. In line with our predictions, the main effect of group indicated that deaf participants had significantly lower comprehension scores (M = 56, SE = 2.50) than hard of hearing (M = 70.35, SE = 2.70) and hearing participants (M = 78, SE = 2.20), F(2, 116) = 18.72, p < .001, ηp² = 0.244. The difference between hard of hearing and hearing participants was not significant (p < .2).
Eye Tracking Metrics
Subject hit count
Subject hit count, that is, the percentage of participants who looked at function and content words, was analyzed with a 2×6 mixed design ANOVA with subtitle presentation rate (12 cps vs. 15 cps) as a between-subject factor and word type (function word vs. 1-, 2-, 3-, 4-, and 5-syllable content words) as a within-subject factor. Table 3 presents descriptive statistics for the different types of words in the subtitles analyzed. There were no significant differences between the subtitle presentation rates.
Table 3.
Mean subject hit count by word type
| Type of word | Mean hit count per word type (%) | SE |
| --- | --- | --- |
| Function | 22.73 | 1.37 |
| 1-syl. content | 38.27 | 2.77 |
| 2-syl. content | 55.78 | 1.42 |
| 3-syl. content | 64.63 | 1.49 |
| 4-syl. content | 64.43 | 2.15 |
| 5-syl. content | 70.01 | 3.17 |
A main effect of word type was observed, F(5, 311) = 122.33, p < .001, η² = 0.672. Post hoc comparisons with Bonferroni correction revealed that 3-, 4-, and 5-syllable words received significantly more visual attention than shorter words; however, there were no significant differences among these three longest word types. Significant differences were also observed between function words and 1- and 2-syllable content words (see Table 3).
Fixation count
A 3×2×6 mixed design ANOVA was first performed on the average fixation count, with viewers (deaf, hard of hearing, and hearing) and subtitle presentation rate (12 cps vs. 15 cps) as between-subject factors and word type (function word vs. 1-, 2-, 3-, 4-, and 5-syllable content words) as a within-subject factor.
No major differences were observed with respect to the presentation rate. As expected, there was a strong main effect of word type, F(5, 590) = 197.52, p < .001, η² = 0.626 (see Table 4). All differences between word types were significant (p < .001, post hoc with Bonferroni correction). A linear trend reflecting a systematic increase in the number of fixations per word was also significant, F(1, 118) = 504.30, p < .001, η² = 0.810. Function words received the least visual attention (M = 0.38, SE = 0.02), whereas 5-syllable content words were fixated the most (M = 1.78, SE = 0.08). The average of 0.38 fixations on function words means that only about one in three function words was fixated, whereas long content words attracted nearly two fixations per word on average.
Table 4.
Mean number of fixations per word by word type and length
| Type of word | Mean fixation count | SE |
| --- | --- | --- |
| Function | 0.38 | 0.02 |
| 1-syl. content | 0.55 | 0.03 |
| 2-syl. content | 0.87 | 0.03 |
| 3-syl. content | 1.18 | 0.04 |
| 4-syl. content | 1.40 | 0.06 |
| 5-syl. content | 1.78 | 0.08 |
The effect of word type was further qualified by an interaction between group and word type, F(10, 590) = 6.32, p < .001, η² = 0.097. The differences in mean fixation count between DHH participants on the one hand and hearing participants on the other were significant for all word types (p < .001, post hoc with Bonferroni correction), but increasing word length affected the DHH participants more than the hearing ones (see Figure 4). In general, hearing participants (M = 0.69, SE = 0.05) made substantially fewer fixations on all types of words than deaf (M = 1.16, SE = 0.06) and hard of hearing participants (M = 1.23, SE = 0.07), and there was a main effect of group, F(2, 118) = 27.44, p < .001, η² = 0.317. There were no significant differences between the two DHH groups with regard to fixation count on words.
Figure 4.
Mean fixation count on words by word type and length. Note. 1-syl. to 5-syl. words were content words.
First fixation duration
A 3×2×6 mixed design ANOVA was performed on the FFD with viewers (deaf, hard of hearing, and hearing) and subtitle presentation rate (12 cps vs. 15 cps) as between-subject factors and word type (function word vs. 1-, 2-, 3-, 4-, and 5-syllable content words) as a within-subject factor. If a word was skipped, the zero value for the FFD was treated as a missing value in order to avoid underestimating the FFD.
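In pandas terms, this cleaning step amounts to recoding zeros as missing before averaging; a short sketch under assumed variable names:

```python
import numpy as np
import pandas as pd

ffd = pd.Series([0.0, 185.0, 220.0, 0.0, 240.0])  # hypothetical FFDs in ms; 0 = word skipped
ffd_clean = ffd.replace(0.0, np.nan)              # skipped words no longer drag the mean down
print(ffd_clean.mean())  # 215.0, averaged over fixated words only
```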
In the case of FFD, only the group effect was significant, F(2, 109) = 17.46, p < .001, η² = 0.243. Post hoc comparisons with Bonferroni correction revealed that hearing participants had substantially shorter FFDs (M = 185.57, SE = 5.86) than deaf (M = 237.84, SE = 6.88) and hard of hearing participants (M = 219.00, SE = 8.33) (ps < .01). No major differences were observed between the deaf and the hard of hearing in terms of first fixation duration (p > .05). There were no differences between the two subtitle presentation rates.
Dwell time
A 3×2×6 mixed design ANOVA was performed on word dwell time with viewers (deaf, hard of hearing, and hearing) and subtitle presentation rate (12 cps vs. 15 cps) as between-subject factors and word type (function word vs. 1-, 2-, 3-, 4-, and 5-syllable content words) as a within-subject factor.
As with the measures analyzed previously, there were no significant differences depending on the presentation rate. However, there was a strong main effect of word type, F(5, 545) = 115.72, p < .001, η² = 0.515. Table 5 presents descriptive statistics for this effect. As expected, the time spent reading function words and 1-syllable content words was the shortest and did not differ between the two categories (M = 296.40, SE = 12.29; M = 282.26, SE = 9.16, respectively, p > .05). Post hoc comparisons with Bonferroni correction showed that the differences in reading time between the other types of words were statistically significant (p < .05).
Table 5.
Mean dwell time per word by word type and length
| Type of word | Mean dwell time (ms) | SE |
| --- | --- | --- |
| Function | 296.40 | 12.29 |
| 1-syl. content | 282.26 | 9.16 |
| 2-syl. content | 336.28 | 9.19 |
| 3-syl. content | 417.40 | 10.96 |
| 4-syl. content | 477.17 | 14.66 |
| 5-syl. content | 579.08 | 20.25 |
In line with our expectations, there were significant differences between groups, F(2, 109) = 49.90, p < .001, η² = 0.478. Hearing participants spent substantially less time reading all types of words (M = 288.43, SE = 12.87) than deaf (M = 478.48, SE = 15.11) and hard of hearing participants (M = 427.39, SE = 18.31) (ps < .001). The deaf and the hard of hearing did not differ significantly from each other (p = .1).
Most importantly, reading time was influenced by the interaction of group and word type, F(5, 545) = 5.00, p < .001, η² = 0.084. Figure 5 presents the general pattern of means. Deaf participants spent substantially (p < .001) more time reading function words (M = 377.26, SE = 20.63) than hard of hearing (M = 289.36, SE = 25.00) and hearing participants (M = 222.58, SE = 17.57). As expected, hearing participants had shorter dwell times than the other two groups for all types of words.
Figure 5.
Mean dwell time per word by group and word type. Note. 1-syl. to 5-syl. words were content words.
Discussion
In this study, we aimed to analyze the process of reading content versus function words in subtitles among deaf, hard of hearing, and hearing viewers. Our findings generally corroborate the results of previous studies on reading content and function words in printed texts in English (e.g., Carpenter & Just, 1983; Just & Carpenter, 1987), but also point to important differences in reading printed text and subtitles.
As regards the reading of function and content words in subtitles, function words received less visual attention than content words in all groups of participants, as shown by the lower subject hit count, fixation count, and dwell time. Overall, only 23% of participants fixated function words, compared to 59% for content words. Function words in our study were thus fixated less often than content words, a tendency similar to that found in studies on printed text. We need to note, however, that the proportion of viewers fixating a given word in subtitled videos is relatively low compared to the proportion of readers fixating a word in printed text. This points to an important difference between reading printed text and reading subtitles: when reading subtitles, people tend to skip more words, which may be attributed to the necessity of simultaneously following the on-screen action.
Similarly to studies on reading printed texts (e.g., Rayner & McConkie, 1976; Rayner et al., 2012), we also found that “the likelihood of word skipping dramatically decreases with word length” (Vitu, 2009, p. 732). Overall, the longer, 3-, 4-, and 5-syllable words received the attention of 66% of participants, 1- and 2-syllable words were fixated by nearly half of the sample, whereas function words were fixated by only 23% of our participants. A systematic increase in the number of fixations per word type was also observed: long content words received nearly two fixations on average, compared to fewer than one for function words.
Subtitle comprehension and reading patterns in our study were largely dependent on hearing status. When compared to DHH viewers, hearing participants made fewer fixations on all types of words and their first fixations were significantly shorter, which resulted in substantially shorter word reading time. Shorter dwell time and lower fixation count are often taken as an indication of higher reading proficiency (e.g., Just & Carpenter, 1987; Rayner, 1998): in our study, hearing people spent the least amount of time on subtitles and had the highest comprehension scores. In contrast, DHH participants tended to spend significantly more time on reading subtitles; they had longer FFDs than the hearing for all word types, which may be an indication of slightly lower reading proficiency and may explain their lower comprehension scores. Longer dwell time is often taken as an indicator of difficulty in extracting information (Holmqvist et al., 2011).
The differences in subtitle reading patterns in terms of comprehension and eye tracking metrics among hearing, hard of hearing, and deaf participants reported here are largely consistent with previous studies (Szarkowska et al., 2011). Interpreting these group differences, we need to acknowledge that hearing participants did not have to fully rely on subtitles to be able to follow the plot, particularly when watching Polish videos. The hearing could supplement the information obtained from subtitles with the spoken dialogue, while deaf viewers had to rely solely on the visual information and subtitle text.
An important finding of this study is a significant difference between how deaf participants viewed function words compared to the hard of hearing and hearing. As shown by dwell time scores, deaf participants spent proportionally more time reading function words than hard of hearing and hearing subjects. This may be attributed to difficulties in correct lexical identification and word recognition in deaf participants.
Text comprehension relies on correct word identification and recognition. “When a printed word is recognized through association(s) in the reader’s mental lexicon, its meanings become available to the reader” (Gonter Gaustad, 2000, p. 60). Less proficient readers have been found to consume more cognitive resources when accomplishing fundamental tasks compared to skilled readers (Gonter Gaustad, 2000; Just & Carpenter, 1992). This was indeed the case for the DHH subjects in this study: greater consumption of cognitive resources and processing effort is indicated by their longer fixation durations, longer dwell times, and higher fixation counts. Longer FFD may reflect delayed or incorrect lexical identification, an early stage of text processing in reading. Therefore, correct identification of a word as either a function or a content word is an important step in becoming a more proficient reader.
Finally, all participants achieved higher comprehension scores in the case of intralingual subtitles. We believe this result may stem from a number of factors. First, hearing and hard of hearing viewers took advantage of their hearing and residual hearing, respectively, to complement the information presented in the subtitles with that coming from the auditory channel. It is also possible that deaf viewers, and to some extent hard of hearing viewers, gained information from the visual channel, particularly in close-ups, which allowed them to lip-read. We believe that while most viewers could make use of lip reading in the case of Polish clips with intralingual subtitles, only those who knew English were able to take advantage of lip reading in the case of English clips with interlingual subtitles.
Limitations and Future Directions
An important limitation of this study is that we did not test the subjects’ reading proficiency in Polish. As a result, we were unable to analyze the subjects on the basis of their Polish language proficiency, only on their hearing status. Owing to the importance of ecological validity in this study, we wanted to allow hearing and hard of hearing people to use their hearing and residual hearing, as they would normally do when watching subtitled videos at home. A similar procedure, with the audio accessible to hearing and hard of hearing participants, was used in previous studies (e.g., Romero-Fresco, 2015; Szarkowska et al., 2011). To verify how hearing viewers read subtitles when they are more compelled to do so, future studies could test interlingual subtitles with an unfamiliar video language (e.g., Perego, Del Missier, Porta, & Mosconi, 2010) or with muted sound.
Another important limitation is that we mostly focused on people with prelingual hearing loss. It would be interesting to examine the potential differences also in older participants with postlingual hearing loss, particularly age-related hearing loss. Another area for further inquiry, which would extend our understanding of the effectiveness of technological advancements in cochlear implantology, is the subtitle reading of DHH individuals who use cochlear implants.
Future research could look into the processing of subtitled audiovisual material by DHH viewers, considering that the probability of fixating a word depends not only on its length, but also on its frequency in the language, the reader’s familiarity with the word, and its age of acquisition (e.g., Brysbaert & Vitu, 1998; Rayner et al., 2012). Another question is how much actual reading takes place when viewers look at subtitles (Kruger & Steyn, 2013) and how reading processes differ in the case of deaf readers, for instance as regards the degree of phonological coding and inner speech.
We also believe the reading of subtitles, particularly by hearing people, is a special case of language-mediated eye movements. Because the dialogues are simultaneously presented to viewers both orally and visually as subtitles, viewers’ eye movements must to some extent depend on what is being said in the dialogue and on how well the subtitles are synchronized with the dialogue. This relation is yet another area of research to be pursued in the future.
Conclusion
The present study has hopefully contributed to extending our understanding of the subtitle reading process in the context of dynamic stimuli like subtitled videos. The main findings of this study include significant differences in eye movement characteristics between function and content words in subtitling as well as systematic differences in general reading patterns between hearing viewers on the one hand and DHH viewers on the other hand. Similarly to printed text, function words in subtitles received less visual attention than content words; however, as opposed to print, the word skipping rates in subtitles were much higher, possibly resulting from the necessity to follow the on-screen action.
No previous studies had been conducted on how deaf, hard of hearing, or hearing people process function and content words in subtitles, so we hope this study fills this gap and opens new research avenues. Our study also has important educational implications. By providing DHH people with contextualized language input through a nonacoustic route, together with the on-screen action, subtitled videos may help DHH people correctly identify words. Correct identification of words is an important step in becoming a more proficient reader, particularly in the context of function words. Educators may want to verify the usefulness of subtitled visual materials when teaching function words to DHH children across multiple contexts.
We believe that the reading of subtitles may positively affect language proficiency by providing deaf viewers with an entertaining and contextualized source of natural language input. Subtitling is well known for its educational value (Díaz Cintas & Fernández Cruz, 2008; Talaván Zanon, 2006; Vanderplank, 1988). When watching subtitled videos featuring meaningful communicative situations, viewers broaden their lexicon, develop word recognition skills, and improve their reading skills. Subtitled videos provide contextualized verbal communication, boost learners’ motivation, and contribute to lowering the affective filter (Krashen, 1985). Subtitling also fosters incidental language learning (Neuman & Koskinen, 1992). Just as hearing people learn foreign languages through subtitling, DHH viewers can learn from subtitles “implicitly via a nonacoustic route” (Wolbers et al., 2012, p. 22). Deaf people themselves confirm that they take advantage of subtitling to learn language (Szarkowska & Laskowska, 2015). We consider it an interesting line of future research to examine whether, and if so how, subtitle reading can positively affect the language proficiency of DHH people, particularly in the context of function words.
Conflicts of Interest
No conflicts of interest were reported.
Funding
Polish Ministry of Science and Higher Education (“Subtitling for the deaf and hard of hearing on digital television,” grant number IP2011 053471).
Note
1. In this paper, we follow European linguistic conventions and use the umbrella term “subtitling” for both intralingual transcriptions of spoken text and interlingual translation, displayed at the bottom of the screen. In some countries, such as the United States, Canada, and Australia, the term “captions” is preferred for intralingual transcriptions of spoken text for viewers who are D/HH. In this study, we tested both intra- and interlingual versions, so we find the term “subtitling” more appropriate. We only retained the term “captions” when quoting from publications where the authors themselves used this term.
References
- Albertini, J., & Mayer, C. (2011). Using miscue analysis to assess comprehension in deaf college readers. Journal of Deaf Studies and Deaf Education, 16, 35–46. doi:10.1093/deafed/enq017
- Antia, S. D., Jones, P. B., Reed, S., & Kreimeyer, K. H. (2009). Academic status and progress of deaf and hard-of-hearing students in general education classrooms. Journal of Deaf Studies and Deaf Education, 14, 293–311. doi:10.1093/deafed/enp009
- Baker, R. (1985). Subtitling television for deaf children. Media in Education Research Series, 3, 1–46.
- Biber, D., Johansson, S., Leech, G., Conrad, S., & Finegan, E. (1999). Grammar of spoken and written English. London: Longman.
- Brysbaert, M., & Vitu, F. (1998). Word skipping: Implications for theories of eye movement control in reading. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 125–147). Amsterdam, the Netherlands: Elsevier.
- Burnham, D., Leigh, G., Noble, W., Jones, C., Tyler, M., Grebennikov, L., & Varley, A. (2008). Parameters in television captioning for deaf and hard-of-hearing adults: Effects of caption rate versus text reduction on comprehension. Journal of Deaf Studies and Deaf Education, 13, 391–404. doi:10.1093/deafed/enn003
- Cambra, C., Leal, A., & Silvestre, N. (2013). The interpretation and visual attention of hearing-impaired children when watching a subtitled cartoon. Journal of Specialised Translation, 20, 134–146.
- Carpenter, P., & Just, M. A. (1983). What your eyes do while your mind is reading. In K. Rayner (Ed.), Eye movements in reading: Perceptual and language processes (pp. 275–307). New York, NY: Academic Press.
- Channon, R., & Sayers, E. E. (2007). Toward a description of deaf college students’ written English: Overuse, avoidance, and mastery of function words. American Annals of the Deaf, 152, 91–103. doi:10.1353/aad.2007.0018
- Connor, C. M., Craig, H. K., Raudenbush, S. W., Heavner, K., & Zwolan, T. A. (2006). The age at which young deaf children receive cochlear implants and their vocabulary and speech-production growth: Is there an added value for early implantation? Ear and Hearing, 27, 628–644. doi:10.1097/01
- Coppens, K. M., Tellings, A., Schreuder, R., & Verhoeven, L. (2013). Developing a structural model of reading: The role of hearing status in reading development over time. Journal of Deaf Studies and Deaf Education, 18, 489–512. doi:10.1093/deafed/ent024
- Díaz Cintas, J., & Fernández Cruz, M. (2008). Using subtitled video materials for foreign language instruction. In J. Díaz Cintas (Ed.), The didactics of audiovisual translation (pp. 201–214). Amsterdam, the Netherlands: John Benjamins.
- Dillon, C. M., de Jong, K., & Pisoni, D. B. (2012). Phonological awareness, reading skills, and vocabulary knowledge in children who use cochlear implants. Journal of Deaf Studies and Deaf Education, 17, 205–226. doi:10.1093/deafed/enr043
- d’Ydewalle, G., & de Bruycker, W. (2007). Eye movements of children and adults while reading television subtitles. European Psychologist, 12, 196–205. doi:10.1027/1016-9040.12.3.196
- d’Ydewalle, G., Praet, C., Verfaillie, K., & van Rensbergen, J. (1991). Watching subtitled television: Automatic reading behavior. Communication Research, 18, 650–666. doi:10.1177/009365091018005005
- d’Ydewalle, G., van Rensbergen, J., & Pollet, J. (1987). Reading a message when the same message is available auditorily in another language: The case of subtitling. In J. K. O’Regan & A. Lévy-Schoen (Eds.), Eye movements: From physiology to cognition (pp. 313–321). Amsterdam, the Netherlands: Elsevier Science Publishers.
- Easterbrooks, S. R., & Stephenson, B. (2006). An examination of twenty literacy, science, and mathematics practices used to educate students who are deaf or hard of hearing. American Annals of the Deaf, 151, 385–397.
- Fagan, M. K., & Pisoni, D. B. (2010). Hearing experience and receptive vocabulary development in deaf children with cochlear implants. Journal of Deaf Studies and Deaf Education, 15, 149–161. doi:10.1093/deafed/enq001
- Field, A. (2005). Discovering statistics using SPSS (2nd ed.). London: Sage Publications.
- Gonter Gaustad, M. (2000). Morphographic analysis as a word identification strategy for deaf readers. Journal of Deaf Studies and Deaf Education, 5, 60–80. doi:10.1093/deafed/5.1.60
- Hayes, H., Kessler, B., & Treiman, R. (2011). Spelling of deaf children who use cochlear implants. Scientific Studies of Reading, 15, 522–540. doi:10.1080/10888438.2010.528480
- Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford: Oxford University Press.
- Jensema, C. J., Sharkawy, S. E., Danturthi, R. S., Burch, R., & Hsu, D. (2000). Eye movement patterns of captioned television viewers. American Annals of the Deaf, 145, 275–285. doi:10.1353/aad.2012.0093
- Just, M. A., & Carpenter, P. (1987). The psychology of reading and language comprehension. Boston, MA: Allyn & Bacon.
- Just, M. A., & Carpenter, P. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122–149. doi:10.1037/0033-295X.99.1.122
- Karchmer, M., & Mitchell, R. E. (2003). Demographic and achievement characteristics of deaf and hard-of-hearing students. In M. Marschark & P. E. Spencer (Eds.), Oxford handbook of deaf studies, language and education (pp. 21–37). New York, NY: Oxford University Press.
- Kelly, L. (2003). The importance of processing automaticity and temporary storage capacity to the differences in comprehension between skilled and less skilled college-age deaf readers. Journal of Deaf Studies and Deaf Education, 8, 230–249. doi:10.1093/deafed/eng013
- Krashen, S. (1985). The input hypothesis: Issues and implications. New York, NY: Longman.
- Kruger, J.-L., & Steyn, F. (2013). Subtitles and eye tracking: Reading and performance. Reading Research Quarterly, 49, 105–120. doi:10.1002/rrq.59
- Kruger, J.-L., Szarkowska, A., & Krejtz, I. (2015). Subtitles on the moving image: An overview of eye tracking studies. Refractory, 25. Retrieved from http://refractory.unimelb.edu.au/2015/02/07/kruger-szarkowska-krejtz/
- Krzeszowski, T. (1994). Gramatyka angielska dla Polaków [English grammar for Poles]. Warszawa, Poland: PWN.
- López-Higes, R., Gallego, C., Martín-Aragoneses, M. T., & Melle, N. (2015). Morpho-syntactic reading comprehension in children with early and late cochlear implants. Journal of Deaf Studies and Deaf Education, 20, 136–146. doi:10.1093/deafed/env004
- Marschark, M. (1993). Psychological development of deaf children. New York, NY: Oxford University Press.
- Marschark, M., Lang, H. G., & Albertini, J. A. (2002). Educating deaf students: Research into practice. New York, NY: Oxford University Press.
- Mayberry, R. I., del Giudice, A. A., & Lieberman, A. M. (2011). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16, 164–188. doi:10.1093/deafed/enq049
- Moeller, M. P., Tomblin, J. B., Yoshinaga-Itano, C., Connor, C. M., & Jerger, S. (2007). Current state of knowledge: Language and literacy of children with hearing impairment. Ear and Hearing, 28, 740–753. doi:10.1097/AUD.0b013e318157f07f
- Musselman, C. (2000). How do children who can’t hear learn to read an alphabetic script? A review of the literature on reading and deafness. Journal of Deaf Studies and Deaf Education, 5, 9–31. doi:10.1093/deafed/5.1.9
- Nelson, K. E., & Camarata, S. M. (1996). Improving English literacy and speech-acquisition learning conditions for children with severe to profound hearing impairments. Volta Review, 98, 17–42.
- Neuman, S. B., & Koskinen, P. (1992). Captioned television as comprehensible input: Effects of incidental word learning from context for language minority students. Reading Research Quarterly, 27, 94–106. doi:10.2307/747835
- Perego, E., Del Missier, F., Porta, M., & Mosconi, M. (2010). The cognitive effectiveness of subtitle processing. Media Psychology, 13, 243–272. doi:10.1080/15213269.2010.502873
- Qi, S., & Mitchell, R. E. (2012). Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future. Journal of Deaf Studies and Deaf Education, 17, 1–18. doi:10.1093/deafed/enr028
- Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422. doi:10.1037/0033-2909.124.3.372
- Rayner, K., & Duffy, S. A. (1988). On-line comprehension processes and eye movements in reading. In M. Daneman, G. E. MacKinnon, & T. G. Waller (Eds.), Reading research: Advances in theory and practice (pp. 13–66). New York, NY: Academic Press.
- Rayner, K., & McConkie, G. W. (1976). What guides a reader’s eye movements? Vision Research, 16, 829–837. doi:10.1016/0042-6989(76)90143-7
- Rayner, K., Pollatsek, A., Ashby, J., & Clifton, C., Jr. (2012). Psychology of reading (2nd ed.). New York, NY: Psychology Press.
- Romero-Fresco, P. (Ed.). (2015). The reception of subtitles for the deaf and hard of hearing in Europe. Bern, Switzerland: Peter Lang.
- Roussel, S., Rohr, A., Raufaste, E., & Nespoulous, J.-L. (forthcoming). Eye-movement analysis in reading content words and function words. Cognition.
- Schirmer, B. R., & McGough, S. M. (2005). Teaching reading to children who are deaf: Do the conclusions of the National Reading Panel apply? Review of Educational Research, 75, 83–117. doi:10.3102/00346543075001083
- Schotter, E., & Rayner, K. (2012). Eye movements in reading: Implications for reading subtitles. In E. Perego (Ed.), Eye tracking in audiovisual translation (pp. 81–102). Rome, Italy: Aracne.
- Singleton, J. L., Morgan, D., DiGello, E., Wiles, J., & Rivers, R. (2004). Vocabulary use by low, moderate, and high ASL-proficient writers compared to hearing ESL and monolingual speakers. Journal of Deaf Studies and Deaf Education, 9, 86–103. doi:10.1093/deafed/enh011
- Stewart, D. (1984). Captioned television for the deaf. British Columbia Journal of Special Education, 8, 61–69.
- Szarkowska, A., Krejtz, I., Kłyszejko, Z., & Wieczorek, A. (2011). Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers. American Annals of the Deaf, 156, 363–378. doi:10.1353/aad.2011.0039
- Szarkowska, A., Krejtz, I., Pilipczuk, O., Dutka, Ł., & Kruger, J.-L. (forthcoming). The effects of text editing and subtitle presentation rate on the comprehension and reading patterns of interlingual and intralingual subtitles among deaf, hard of hearing and hearing viewers. Across Languages and Cultures.
- Szarkowska, A., & Laskowska, M. (2015). Poland – a voice-over country no more? A report on an online survey on subtitling preferences among Polish hearing and hearing-impaired viewers. In Ł. Bogucki & M. Deckert (Eds.), Accessing audiovisual translation (pp. 179–197). Bern, Switzerland: Peter Lang.
- Talaván Zanon, N. (2006). Using subtitles to enhance foreign language education. Porta Linguarum, 6, 41–52.
- Traxler, C. B. (2000). The Stanford Achievement Test, 9th edition: National norming and performance standards for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 5, 337–348. doi:10.1093/deafed/5.4.337
- Trezek, B., Wang, Y., & Paul, P. V. (2010). Reading and deafness: Theory, research, and practice. Clifton Park, NY: Delmar, Cengage Learning.
- Trybus, R. J., & Karchmer, M. A. (1977). School achievement scores of hearing impaired children: National data on achievement status and growth patterns. American Annals of the Deaf, 122, 62–69.
- van Lommel, S., Laenen, A., & d’Ydewalle, G. (2006). Foreign-grammar acquisition while watching subtitled television programmes. The British Journal of Educational Psychology, 76, 243–258. doi:10.1348/000709905X38946
- Vanderplank, R. (1988). The value of teletext sub-titles in language learning. ELT Journal, 42, 272–281. doi:10.1093/elt/42.4.272
- Vitu, F. (2009). On the role of visual and oculomotor processes in reading. In S. P. Liversedge, I. D. Gilchrist, & S. Everling (Eds.), The Oxford handbook of eye movements (pp. 731–749). Oxford: Oxford University Press.
- Waters, G., & Doehring, D. (1990). Reading acquisition in congenitally deaf children who communicate orally: Insights from an analysis of component reading, language and memory skills. In T. H. Carr & B. A. Levy (Eds.), Reading and its development: Component skills approaches (pp. 323–377). San Diego, CA: Academic Press.
- Wauters, L. N., van Bon, W. H. J., & Tellings, A. E. J. M. (2006). Reading comprehension of Dutch deaf children. Reading and Writing: An Interdisciplinary Journal, 19, 49–76. doi:10.1007/s11145-004-5894-0
- Wolbers, K. A., Dostal, H. M., & Bowers, L. M. (2012). “I was born full deaf.” Written language outcomes after 1 year of strategic and interactive writing instruction. Journal of Deaf Studies and Deaf Education, 17, 19–38. doi:10.1093/deafed/enr018