Author manuscript; available in PMC: 2021 Jun 1.
Published in final edited form as: Perspect ASHA Spec Interest Groups. 2020 Aug 6;5(6):1400–1409. doi: 10.1044/2020_PERSP-20-00029

The Duality of Patterning in Language and its Relationship to Reading in Children with Hearing Loss

Susan Nittrouer 1
PMCID: PMC7748246  NIHMSID: NIHMS1620634  PMID: 33344770

Abstract

Purpose:

Duality of Patterning has long been recognized as a unique design feature of human language. It refers to the distinct bi-level structure of language, in which words constitute one level (semantic) and word-internal, phonetic elements constitute the other (phonological). This report describes this design feature and offers a perspective on why and how it should help shape reading interventions for children with hearing loss.

Method:

This report comprises three sections. Section I offers an overview of Duality of Patterning. Section II reviews results from a longitudinal study illustrating how children with and without hearing loss acquire each level of linguistic structure, and how each level contributes differently to reading acquisition for each group. Section III provides suggestions for how to incorporate this information into interventions for children with hearing loss.

Results:

Outcomes presented illustrate that semantic structure begins to take form first, with phonological structure following. Semantic structure is related to reading comprehension, and phonological structure is related to word recognition, at least for alphabetic orthographies. Children with hearing loss acquire a less differentiated linguistic system, with structure at the phonological level only partly, or coarsely, acquired, and with a lack of clear distinction from the semantic level of structure. Consequently, the roles of each level of structure in reading acquisition are less clearly defined for children with hearing loss.

Conclusion:

For children with normal hearing, learning to read is compartmentalized: Emerging sensitivity to phonological structure supports development of word recognition, and semantic-level skills support reading comprehension. Hearing loss diminishes language skills overall, but especially phonological sensitivity. Children with hearing loss, especially those with cochlear implants, must rely on all language skills to learn to read, including both word recognition and reading comprehension, which creates a highly inefficient processing strategy.

Introduction

The morning paper arrives on our driveway. We retrieve it, and enjoy a cup of coffee while perusing the latest news. Our ability to extract meaning from the little black symbols on those white pages seems automatic, like something we were innately disposed to do. But in fact that act of reading is a highly complex skill, developed by years of training and practice. This ability to read depends on spoken language functions that are biologically driven, and shaped by the language environment of the learner. Perceptual and cognitive functions originally designed for other purposes are recruited and modified through the reading instruction we receive as children (Huettig, Kolinsky, & Lachmann, 2018). In aggregate, the skills underlying our ability to read are ones that have emerged through millennia of human evolution and invention. But precisely because as competent readers the process of reading feels so automatic – so natural – it is easy to overlook all the skills the child must master and integrate in order to be able to understand that morning paper. What exactly are those skills and do those skills differ for the child with hearing loss? How can we best support struggling readers with hearing loss in their efforts to acquire this important function? To answer these questions, it is necessary to recognize all the skills that go into literacy acquisition, and how hearing loss may interfere with that acquisition.

This report describes how one design feature of human language provides a platform for understanding the skills that go into literacy acquisition, as well as why children with hearing loss face challenges regarding that feature. The report is divided into three sections: Section I offers details on this design feature, including its phylogeny and ontogeny. Section II provides evidence from a longitudinal study regarding how children with hearing loss are especially hindered by one component of this feature. Section III provides suggestions for how to incorporate this design feature in planning reading interventions for children with hearing loss.

I. Duality of patterning

In 1960, Hockett published what by any criterion can be considered a seminal paper. In it he listed thirteen design features of human language that he proposed make it unique among animal communication systems. This list included features such as interchangeability, meaning that individuals who can perceive language can produce it as well. This feature is not present in all animal communication systems. For example, in many bird species only male members engage in song. Female members of the species can perceive those songs, and are differentially attracted by them, but they typically do not produce songs themselves (Byers & Kroodsma, 2009; Odom, Hall, Riebel, Omland, & Langmore, 2014). Another exclusive design feature of human language described by Hockett is displacement, referring to the fact that only humans are able to talk about objects or events that are not currently present in time or place. Members of other species may communicate – for example, warning members of their group about an approaching predator – but they are restricted to the here and now. Humans are able to discuss past and future events, or even topics as abstract as feelings and future plans. Although all thirteen features described by Hockett continue to spark research, the duality of patterning is the one that has received the most attention. Even now, in the 21st century, psycholinguists are examining this special property (de Boer, Sandler, & Kirby, 2012; Wacewicz & Zywiczynski, 2015). In light of its continued relevance, it is useful to explore how the concept of duality of patterning can help define the process of literacy acquisition, and what can disrupt this process for learners with hearing loss.

Duality of patterning in human language describes two levels of structure that are present in all languages, if a language has existed for several generations or more. At one level, meaningful elements (words, or morphemes) can be combined according to language-specific rules to convey information regarding the relationships among those meaningful elements. At another level, word-internal meaningless elements (most importantly, phonemes) can be sequenced, again according to language-specific rules, to generate words.

A question that has figured prominently in research on the duality of patterning is how language came to possess this unique feature. Unfortunately, the evolution of human language is inherently difficult to investigate, because there is no physical evidence of how it transpired. Whereas it is possible to trace the emergence of modern human anatomy by examining fossils, no such record exists for human language. Consequently, a variety of specific accounts have been proposed regarding exactly how this evolutionary process unfolded, with few methods for deciding among them. The broad hypothesis, generally agreed upon by proponents of all accounts, is that words emerged first from calls among early humans (e.g., Falk, 2004). As more of these early words were incorporated into the lexicon, pressure arose to differentiate those items according to their internal components. Those smaller elements could then be recombined in novel sequences to generate new words (Lieberman, 2015; Studdert-Kennedy, 1998). Thus, there is a proposed temporal offset in the evolution of these two structural levels, with the semantic level emerging first and the phonological level appearing later. Of course, it is difficult to test that proposal, because most spoken languages currently existing in the world are old enough to have developed both levels of structure. Fortunately, however, a newly emerging sign language offers some evidence. This language has emerged in Israel since the founding of the country around the middle of the twentieth century. With the new nation came new citizens, and new communities. As one such community was being settled, several deaf children were born into a single family. When those siblings grew and married, the number of deaf people in this small community increased, and a novel sign language began to appear.
As recently as the early 21st century, it was apparent that although this language possesses a substantial lexicon and follows clear syntactic rules, it lacks the phonological level of structure evident in more mature sign languages (Sandler, Aronoff, Meir, & Padden, 2011). Presumably that structure has yet to emerge.

Aside from the temporal offset in the emergence of these two levels of structure, there is one other important difference between them. Words are well marked in the acoustic waveform by a rise and fall in amplitude, especially single-syllable words, which most early words likely were. Phonetic segments, on the other hand, are highly encoded, with no discrete acoustic pattern discernible for each segment. Instead, the acoustic structure affiliated with any one phonetic segment overlaps with that of one or more other segments. And the acoustic structure defining any given segment varies, depending on the place it occupies within the syllable and the identity of surrounding segments. For example, the voiced alveolar stop [d] is associated with very different acoustic cues to its identity in the coda position than in the syllable-initial position. Similarly, the formant frequencies of any given vowel differ depending on the consonant context in which that vowel is located. This lack of discreteness and invariance in the physical structure of phonetic segments helps to explain why the act of reading – at least with alphabetic orthographies – is not so straightforward. The writing systems of various languages differ with respect to how transparently the orthography lines up with the individual phonetic segments of that language, but in no language are phonetic segments discretely represented with invariant physical structure in the acoustic signal. Thus the first step in learning letter-sound correspondences is more difficult than it might seem to the casual observer, because the sounds involved are not isolable in the speech stream.

When it comes to development, the emergence of the two levels of language structure described by Hockett (1960) replicates what is seen in evolutionary accounts. A model of development known as lexical restructuring captures especially well the quasi-independence and the mismatched temporal timetables for emergence of these two structural levels. According to this model, children’s initial lexicons consist of words that are represented as coarse, or “global” structures, meaning they are largely unanalyzed with respect to internal phonological units (Jusczyk, 1992; 1993; Menn, 1983; Menyuk & Menn, 1979; Nittrouer, 2006). Thus, the young child is not tasked with disentangling the highly encoded organization of word-internal phonetic structure before being able to use those words to signal wants and thoughts. These lexical representations match those proposed as first words in accounts of language evolution. And a temporal offset can be found in the ontogeny of this bi-level structure, as well. It is only sometime after the emergence of those first words that children begin to discover word-internal phonological structure (including phonetic), leading both to a reorganization of the existing lexicon and entry of new items into the lexicon using phonetic structure (Charles-Luce & Luce, 1995; Metsala, 1997; Storkel, 2002; Walley, 1993; Walley, Metsala, & Garlock, 2003). The acquisition of refined phonological representations requires most of childhood to complete, as does the ability to utilize those representations in other language functions, such as verbal working memory, novel word learning, and of course, reading.

The invention of orthography

In contrast to the biologically driven evolution of spoken (or signed) languages, written language – or orthography – is believed to have been deliberately invented by humans. This invention is also much more recent. Estimates of the first appearance of spoken languages vary greatly depending on what is considered language. Generally, however, these estimates place this appearance at least 50,000 years ago, with some estimates placing it as long ago as 100,000 years (Bolhuis et al., 2014). In contrast, the first writing systems are thought to have appeared only 5,000 years ago (Gelb, 1963), and not all societies that ever had their own spoken language developed a written language. In fact, it was relatively uncommon for a spoken language to have a written version.

Early orthographies took a form closely related to pictographies or logographies, where each symbol represented a word (Gelb, 1963). Of these early systems, the Chinese orthography has continued to be used, largely unaltered from its original form. Two thousand years after the appearance of those early pictographies, the first alphabetic orthography appeared. Unlike the earlier orthographies, this one was invented only once, in a part of the world that was then known as Phoenicia (Gelb, 1963; Sims, 1982). Thus, alphabetic orthographies are the more recent invention, and the least common if one takes a broad historical view of reading. Nonetheless, most research on reading and related disorders examines functioning with alphabetic orthographies.

The duality of patterning affects how we read

There are differences in how skills related to the semantic and phonological levels of spoken language relate to skill in reading pictographies and alphabets. In an especially pertinent study, Zhou et al. (2018) investigated the language skills associated with learning to read Chinese versus English for 7- to 8-year-old children who were either first-language (L1) or second-language (L2) learners of Chinese. These investigators were particularly well positioned to conduct this work, because it was done in Hong Kong. This territory had been colonized by the British in the mid-19th century, and English was the official language. That meant that all children learned to read an English orthography. When the territory reverted to Chinese control at the end of the twentieth century, many citizens were not fluent in Chinese. Consequently, there were many English-speaking families whose children needed to learn to speak and read Chinese, but whose parents did not know the language. These are the children who formed the L2 group in the Zhou et al. study. All children were learning to read English, regardless of whether they came from a monolingual English home or a bilingual Chinese-English home environment.

The authors administered two tests of word recognition: one for Chinese characters and one for English words. Children in both groups were also given tests of morphological and phonological awareness for Chinese, along with several other tasks. In the test of morphological awareness, children were asked to create new compound words from two previously acquired morphemes. This task reflects the fact that in Chinese, unlike English, morphological structure largely involves compounding words, rather than adding inflectional morphemes to base words. The test of phonological awareness involved elision, with children asked to delete either a syllable or an initial segment. When it came to reading Chinese characters, results showed no correlation between reading and phonological awareness for L1 learners (r = −.07), but some correlation between reading and morphological awareness for these same learners (r = .32). The opposite pattern was found for L2 learners, with a strong correlation observed between Chinese reading and phonological awareness (r = .54), but not morphological awareness (r = .08). When it came to reading English, both groups showed some correlation between reading ability and phonological awareness, although it was weaker for the L1 than the L2 learners (r = .35 versus r = .52, respectively). Correlation coefficients were even weaker for English word reading with morphological awareness, for both the L1 and L2 learners (r = .02 versus r = .16, respectively). This experiment strongly suggests that sensitivity to each of the two levels of language structure described by duality of patterning is differently related to learning to read each kind of orthography. In Chinese, where the characters are more closely related to the semantic properties of words than to their phonological structure, the L1 children attended to those semantic properties. This approach was apparently not a strategy that the L2 children had acquired, so they instead continued using the strategies they had developed for recognizing English words. In English, the orthographic symbols are directly related to phonological structure, and both groups of children were able to attend to that structure to some extent when recognizing those words.

Outcomes of this experiment reflect a critical consequence of how the brain evolved to support human language. Two quasi-independent neural networks appear to have emerged, with one more responsible for processing meaning and the other more responsible for processing phonological structure (Poldrack et al., 1999). The primary evidence for this claim is provided by results of functional neuroimaging studies of the brain during reading. Of course, depending on what we are reading, other networks can become involved. We may need to perform cognitive operations to comprehend what we are reading, or the material may spark emotions. The networks excited in response to these factors can obscure responses strictly associated with processing the written text. Thus, in order to maintain the focus on reading most of these imaging studies have restricted the materials used to single words (McDermott, Petersen, Watson, & Ojemann, 2003; Price, Moore, Humphreys, & Wise, 1997). In such studies, participants are commonly asked to attend to either the meaning or the phonological structure of the sets of words being presented. Outcomes of these studies indicate that these two attributes of written material are processed separately, with differing neural structures involved. These are neural structures that are utilized for other purposes from birth. The child must learn to repurpose them during the act of reading (Huettig et al., 2018).

Challenges with phonological structure

Hearing loss imposes functional deficits beyond just a loss of audibility. Recent advances in auditory prostheses have improved options for treating the reduced audibility that is the primary symptom of hearing loss. Hearing aids or cochlear implants can generally ensure that gain is sufficient to restore aided thresholds to near-normal levels. Nonetheless, problems remain. The representation of both temporal and spectral structure is degraded as a result of hearing loss, and clear representations of that structure are needed to disentangle the highly encoded nature of phonological (especially phonetic) structure. Children with hearing loss should be able to develop at least a serviceable lexicon, meaning one consisting of common words, based on the less-detailed acoustic structure available to them. The degraded signals imposed by hearing loss should instead disproportionately affect a child's ability to acquire skill at the other level of language structure, namely the phonological level. And that is precisely what we have found in a longitudinal study of children with and without hearing loss.

II. Evidence from a longitudinal study

In this section some results from that longitudinal study are reviewed. First, participants are described. Next, trends are illustrated for acquisition across childhood of both levels of linguistic structure. Finally, how sensitivity to these levels of structure promotes reading acquisition for children with normal hearing and children with hearing loss is explained. These outcomes are offered here in support of the main premise of this report: that the duality of patterning is a concept pertinent to understanding language and literacy learning by deaf children. Details of procedures from these experiments can be found in the full research reports, especially Nittrouer (2010) and Nittrouer and Caldwell-Tarr (2016).

Participants

The children in our study were infants or toddlers when they began participating. All children had nonverbal intelligence above a standard score of 70, and no child had any condition other than hearing loss that could disrupt the development of language on its own. All children received intervention in spoken language, and their parents reported wanting their children to be sufficiently proficient in spoken language to be able to attend mainstream elementary school without the aid of a sign-language interpreter. Between one and four years of age, these children were tested every six months, within one month of their six-month birthdays. Once they entered kindergarten, the children were tested every second year, in the summer after even-numbered grade levels. During these school years, 122 children from the original longitudinal study were tested: 49 with normal hearing, 19 with moderate hearing loss who used hearing aids, and 54 with moderate-to-profound hearing loss who had cochlear implants. These are the children for whom data are reported here.

Developmental trends across childhood

In order to illustrate the emergence of the semantic level of structure in the language of these children, vocabulary development is traced across the school years. The Expressive One-Word Picture Vocabulary Test (Brownell, 2000) was administered at all test ages from kindergarten to eighth grade, and Figure 1 shows the mean numbers of vocabulary items children in each group could name at each grade, along with standard errors of the means. Raw scores are shown, so that development can be observed. This figure reveals that children with hearing aids and children with cochlear implants performed more poorly than their peers with normal hearing, but the discrepancies decreased slightly over childhood. The first row of Table 1 shows Cohen's ds for these vocabulary scores at kindergarten and eighth grade. Cohen's d can be interpreted in standard deviation units: a Cohen's d of 1.00 means that the groups differed by one standard deviation, which would place mean performance for one group at the 16th percentile of the distribution of the other group. These Cohen's ds are shown for children with normal hearing versus children with cochlear implants, and for children with normal hearing versus those with hearing aids. They indicate that, across childhood, children with hearing loss improved in their knowledge of the semantic level of language structure relative to the children with normal hearing.
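This interpretation of Cohen's d can be sketched in a few lines of code. The computation below assumes a pooled standard deviation and normally distributed scores; the numbers passed in are purely illustrative, not the study's raw data.

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

def percentile_of_lower_mean(d):
    """Percentile of the lower-scoring group's mean within the
    higher-scoring group's distribution, assuming normality: 100 * Phi(-d)."""
    return 100 * 0.5 * (1 + math.erf(-d / math.sqrt(2)))

# A d of 1.00 places the lower group's mean near the 16th percentile
print(round(percentile_of_lower_mean(1.00)))  # → 16
```

The 16th-percentile figure falls directly out of the normal distribution: one standard deviation below a mean corresponds to a cumulative probability of about .159.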

FIGURE 1:


Growth curves for raw expressive vocabulary measures showing group means and standard errors of the mean.

Table 1.

Cohen’s ds comparing performance of children with NH and those with CIs or HAs at two time points.

NH versus CIs NH versus HAs
kindergarten 8th grade kindergarten 8th grade
Vocabulary 1.36 0.65 0.87 0.27
Phonological Awareness 2.43 1.23 1.53 0.81

Figure 2 shows growth curves for a test of phonological awareness that was administered across the school years. This was a Final Consonant Choice task. It consisted of 48 items and was administered in an audio-visual format on a computer monitor to ensure that children with hearing loss had ample opportunity to recognize all words. In this task the child was presented with a target word and had to repeat it, a procedure that served as an additional check on children's word recognition. After repeating the target word, the child was presented with three choices and had to select the one that ended in the same sound as the target. It is apparent from Figure 2 that children with hearing loss performed more poorly on this task, relative to children with normal hearing, than they did on the vocabulary measure. Cohen's ds for this task are shown in the second row of Table 1. Children with hearing loss improved in their sensitivity to phonological structure, but that sensitivity never came close to that of children with normal hearing. In fact, inspection of Figure 2 reveals that at eighth grade children with cochlear implants performed only about as well as children with normal hearing at second grade. These outcomes support the conclusion that children with hearing loss encounter substantially more difficulty acquiring sensitivity to the phonological level of language than to the lexical, or semantic, level.

FIGURE 2:


Growth curves for the phonological awareness task showing group means and standard errors of the mean.

Duality of patterning and literacy acquisition

Our longitudinal study was conducted in the United States, so we were interested in how well the children were learning to read an alphabetic orthography. For this report, outcomes related to literacy and related language skills are restricted to data from our longitudinal study collected at the end of second grade, because this test age matches that of Zhou et al. (2018). In order to explore how well various language skills predict reading ability at second grade, we selected three measures related to each of the semantic and phonological levels of language structure. Two additional measures were also included: verbal working memory and processing speed. These functions are often considered critical to successful literacy acquisition (Sesma, Mahone, Levine, Eason, & Cutting, 2009; Wolf & Bowers, 1993).

Two measures of reading ability were obtained, and both came from the Qualitative Reading Inventory (Leslie & Caldwell, 2006). In this task, the child is asked to read passages. We selected three passages to administer at second grade: two narratives and one expository. The percentage of words read correctly across the three passages is reported here, which is similar to the measure of word recognition obtained by Zhou et al. (2018), although words were presented in isolation in that study. In addition, children were asked to answer ten comprehension questions per passage. The number of questions answered correctly (out of 30) was used in this report as a way of indexing reading comprehension, or children’s abilities to get meaning from text.

Three measures of phonological sensitivity were obtained. The first two assessed awareness and the third assessed processing. These tasks were: (1) the Final Consonant Choice task already described; (2) an Initial Consonant Choice task, administered the same way as the Final Consonant Choice task; and (3) a Phoneme Deletion task in which children repeated a nonword and then were asked to say the word created by removing one segment (e.g., say [skɛlf] without the [k] sound). The position of the segment to be deleted varied, as did whether or not it was part of a cluster (Nittrouer, Caldwell-Tarr, Sansom, Twersky, & Lowenstein, 2014). The percentage of correct answers served as the dependent measure for all three tasks.

Two additional measures were obtained: (1) verbal working memory, assessed with a serial recall task involving consonant-vowel-consonant nouns (Nittrouer, Caldwell-Tarr, & Lowenstein, 2013); and (2) rapid color naming from the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999). For serial recall, word lists were six words long, and ten trials were administered. The percentage of words recalled in the correct position across the ten trials was used as the dependent measure. For rapid color naming, the mean time in seconds required to complete the task across two trials served as the dependent measure.
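As a sketch of the serial recall scoring rule just described, a response earns credit only when a word is recalled in its original list position. The word lists below are hypothetical examples, not the study's stimuli.

```python
def serial_recall_score(trials):
    """Percentage of words recalled in the correct serial position,
    pooled across all trials. Each trial is a (presented, recalled) pair."""
    correct = total = 0
    for presented, recalled in trials:
        total += len(presented)
        # Pad the recall with None so positions align with the presented list
        padded = list(recalled) + [None] * (len(presented) - len(recalled))
        correct += sum(p == r for p, r in zip(presented, padded))
    return 100 * correct / total

trial = (["cat", "dog", "bed", "cup", "pig", "hat"],   # presented list
         ["cat", "bed", "dog", "cup", "pig"])          # child's recall
# "dog" and "bed" were recalled, but in swapped positions, so they earn
# no credit under position scoring: 3 of 6 words are in place
print(serial_recall_score([trial]))  # → 50.0
```

Position scoring is stricter than free-recall scoring: transposed items count as errors, which is what makes the task a measure of serial order memory rather than item memory alone.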

Three measures of knowledge and skill with semantic structure were collected: (1) the expressive vocabulary measure already described; (2) a measure of auditory comprehension of language using the Paragraph Comprehension subtest of the Comprehensive Assessment of Spoken Language (Carrow-Woolfolk, 1999); and (3) a measure of the child’s ability to construct an oral narrative. Detailed descriptions of how these narratives were collected are available elsewhere (Nittrouer, Lowenstein, & Holloman, 2016; Nittrouer, Muir, Tietgens, Moberly, & Lowenstein, 2018). The number of points obtained according to a 36-point scoring rubric was used as the dependent measure for the narrative task. Standard scores were used for the first two measures. All three tasks assess children’s abilities at deriving meaning from language, which represents the semantic level of structure according to the duality of patterning.

Table 2 shows mean scores and standard deviations for these eight measures, and Table 3 shows statistical outcomes. Outcomes shown in these tables reflect what was observed in Figures 1 and 2: the greatest effects of hearing loss are seen for the measures of phonological awareness. Among those three tasks, the effect size for the Phoneme Deletion task is the smallest. This replicates trends we have seen previously for tests of phonological awareness versus processing (Nittrouer et al., 2016; Nittrouer et al., 2018). Children with hearing loss are capable of manipulating phonological structure, if they can recognize it. Consequently, performance on measures such as this Phoneme Deletion task is explained in some part by those processing abilities. These tables also reveal that children with hearing loss performed most like children with normal hearing on the two additional measures of verbal working memory and processing speed, as well as on auditory comprehension of language.

Table 2.

Mean scores (and standard deviations) for measures.

                                                      Group
                                               NH          HA          CI
                                             M    SD     M    SD     M    SD
Reading
 Word Recognition (% correct)               96     3    94     4    91     7
 Comprehension (out of 30 questions)        21     3    18     5    17     6
Phonological Structure
 Initial Consonant Choice (% correct)       87    13    80    14    64    26
 Final Consonant Choice (% correct)         70    18    53    26    36    26
 Phoneme Deletion (% correct)               71    22    56    30    48    32
Additional Measures
 Serial Recall (% correct)                  56    17    56    19    44    15
 Rapid Naming (seconds)                     76    20    81    26    90    30
Semantic Structure
 Expressive Vocabulary (standard score)    110    14   101    22    95    18
 Auditory Comprehension (standard score)   112    12   106    21   100    20
 Narrative Score (out of 36 points)         26     4    23     6    20     7

Table 3.

Results of one-way ANOVAs for each measure, with post hoc comparisons.

                                            F       p      η2    NH v HA   NH v CI   HA v CI
Reading
 Word Recognition (% correct)            10.14   <.001   .146      NS      <.001*      NS
 Comprehension (out of 30 questions)      9.18   <.001   .134      NS      <.001*      NS
Phonological Structure
 Initial Consonant Choice (% correct)    18.60   <.001   .240      NS      <.001*     .010*
 Final Consonant Choice (% correct)      27.17   <.001   .315     .018*    <.001*     .028*
 Phoneme Deletion (% correct)             8.79   <.001   .131      NS      <.001*      NS
Additional Measures
 Serial Recall (% correct)                8.71   <.001   .128      NS       .001*     .013
 Rapid Naming (seconds)                   4.37    .015   .068      NS       .012*      NS
Semantic Structure
 Expressive Vocabulary (SS)              10.12   <.001   .145      NS      <.001*      NS
 Auditory Comprehension (SS)              6.13    .003   .093      NS       .002*      NS
 Narrative Score (out of 36 points)      13.83   <.001   .189      NS      <.001*      NS

* = contrast is significant at the .05 level with Bonferroni correction; NS = not significant (p > .10)

Table 4 shows how each of these eight measures was related to reading, separately for word recognition and reading comprehension, and separately for each group. Children with hearing aids formed a much smaller group than either the children with normal hearing or the children with cochlear implants. As a result, correlation coefficients obtained for children with hearing aids did not necessarily reach statistical significance, even when similar coefficients were significant for children with normal hearing or with cochlear implants.

Table 4.

Pearson product-moment correlation coefficients between word recognition or reading comprehension scores and each of eight other measures. The first three measures index phonological structure, the next two are additional measures, and the last three index semantic structure.

| | Initial Consonant | Final Consonant | Phoneme Deletion | Serial Recall | Rapid Naming | Exp. Vocab. | Aud. Comp. | Narrative Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Word Recognition | | | | | | | | |
| NH | .55*** | .40** | .51*** | .14 | −.21 | .16 | .14 | .01 |
| HA | .41 | .62** | .49* | .39 | −.13 | .62** | .53* | .54* |
| CI | .60*** | .38** | .67*** | .43** | −.21 | .59*** | .41** | .58*** |
| Comprehension | | | | | | | | |
| NH | .18 | .07 | .20 | .26 | −.04 | .48** | .56*** | .16 |
| HA | .63** | .62** | .58** | .36 | −.28 | .58* | .88*** | .81*** |
| CI | .62*** | .35* | .61*** | .30* | −.21 | .70*** | .75*** | .69*** |

\* p < .05; ** p < .01; *** p < .001

The coefficients for children with normal hearing show that word recognition was significantly correlated only with the measures of phonological sensitivity, replicating what Zhou et al. (2018) had reported for children learning to read the English orthography. For children with hearing loss, word recognition was also associated with the measures of phonological sensitivity, indicating that this sensitivity was associated with decoding, even though these children had quite poor sensitivity to phonological structure. But other correlation coefficients were also significant for these children with hearing loss. In particular, word recognition was significantly correlated with semantic skills, indicating that these children were using top-down context effects to a considerable extent. Working memory, as measured by the serial recall task, was also related to word recognition for these children. Overall it is fair to conclude that recognizing written words is a more difficult task for children with hearing loss, requiring them to make use of all their language skills and allocate greater cognitive resources. This difference in strategy likely arises because the children with hearing loss simply do not have sufficient sensitivity to phonological structure to support word recognition through this means alone. If we return to Figure 2, it is apparent that in second grade their sensitivity to phonological structure was quite poor, compared to that of the children with normal hearing.

For reading comprehension, the correlation coefficients for children with normal hearing indicate that these scores were explained largely by their semantic language skills, as these are the only correlation coefficients to be significant. Thus there is a sharp contrast in outcomes for these correlation coefficients and those obtained for word recognition for children with normal hearing, suggesting that they utilize phonological sensitivity for word recognition, but use semantic knowledge for comprehension. For children with hearing loss, however, the outcomes of the analyses for reading comprehension resemble those for word recognition: These children appear to need to bring all their language skills to bear on the task of reading, even though those language skills are not as keen as those of children with normal hearing.

Interpreting these results

In Section I, two examples were given of how readers alternately attend to semantic or phonological structure, depending on which is most relevant for the task at hand. Children growing up bilingual in Chinese and English learn to attend primarily to semantic structure when reading Chinese characters, but can attend to phonological structure when reading English text (Zhou et al., 2018). And adult native speakers of English recruit different neural networks when reading, depending on whether the task at hand calls for attention to the semantic or the phonological properties of the words being read (McDermott et al., 2003; Price et al., 1997). The children with normal hearing whose data are described above had clearly developed this strong independence in processing strategies for different components of the reading task. For word recognition they appear to have attended primarily to phonological structure, judging by the strong correlations of word recognition scores with scores for phonological awareness and processing. Reading comprehension, on the other hand, was more strongly associated with their skill at processing semantic-level structure. In contrast, this bifurcation in strategies for different components of reading had clearly not emerged for the children with hearing loss by second grade. Instead they appear to accomplish both components of reading with a single approach, bringing both phonological and semantic processing to the task. This is surely a more cumbersome and less efficient strategy.

III. Applying these lessons to reading intervention for children with hearing loss

The perspective provided in Sections I and II describes a design feature of human language that is difficult for children with hearing loss to acquire. The duality of patterning in human language refers to its bi-level structure, encompassing one level of meaning (semantic) and one level of fine-grained elements (phonological). Semantic structure begins to emerge in children's language earlier than phonological structure, and it is with this latter structure that children with hearing loss encounter the most difficulty. Phonological structure typically does not reach mature status until adolescence, so children with hearing loss are in mainstream academic environments during the period in which they should be acquiring it. That means the challenges they face can easily be overlooked by clinicians and teachers unfamiliar with this process. But it is critical that clinicians and teachers keep the principles of duality of patterning in mind when designing interventions for these children. This section describes ways that can be done most effectively.

Language intervention across childhood

The outcomes reported here demonstrate that even children with hearing loss who receive early intervention lag behind their peers with normal hearing upon entering elementary school, in both semantically and phonologically based skills. Furthermore, the lexical restructuring model of language acquisition suggests that sensitivity to phonological structure in the acoustic speech signal continues to be honed through the elementary grades, even for children with normal hearing. For children with hearing loss, who have access to only impoverished speech signals, activities designed to help them learn to recognize phonological structure are essential. Such activities should be started during the preschool years and continued through elementary school. Of course, there is a typical developmental sequence for the various levels of phonological structure (Stanovich, Cunningham, & Cramer, 1984), and that sequence should determine which levels of structure are targeted at each age. During the preschool years, activities should target phonological structure at the level of the syllable, including rhyme. During elementary school, the focus should shift to phonemic structure and include both awareness and processing of that structure.

In addition, many syntactic structures are acquired only after children typically enter elementary school. According to Hockett's (1960) account of duality of patterning, syntax falls within the realm of semantic structure because it conveys meaning about the relationships among individual words. Late-acquired semantic knowledge includes understanding subordinate clauses and differences in meaning across sentences that share the same surface form (e.g., Mickey tells Donald to go to the store versus Mickey promises Donald to go to the store). These more complex syntactic structures need to be targeted in intervention for children with hearing loss during the school years. Furthermore, explicit instruction in morphological forms can benefit children's literacy acquisition (Apel & Werfel, 2014), and so should be incorporated into intervention plans.

Overall, intervention for school-age children with hearing loss should target skills at both the semantic and phonological levels of structure. This instruction should be deliberately planned, so that lessons across these levels of structure can be interleaved.

Use whole language structures in spoken language

Using complete language structures, mostly whole sentences, during instruction and intervention activities facilitates the acquisition of sensitivity to both the semantic and phonological levels of language. At the semantic level, children learn to combine words in ways that communicate the relationships among those words, by learning how to act on classes of words as opposed to individual words. As children are learning early words, these words form natural categories. Young children may not know the terms noun or verb, but they nonetheless figure out how different classes of words can be used in word sequences: for example, words about things and people go in one position, and words about actions go in another position (Mintz, 2003). This knowledge about the structural properties of words is as important as knowledge about their meanings, and children acquire it from the syntactic frames in which words occur.

Using whole language structures is also critical to a child's development of sensitivity to phonological structure. As described earlier, the acoustic correlates of phonemic categories vary depending on the position a segment occupies in a word. Linguistic factors such as syllable stress and rate of speech can further affect these acoustic properties. Children need experience hearing long stretches of speech to begin recognizing the various physical forms each phonemic segment can take.

Provide access to the visual speech signal

Studdert-Kennedy (1987) described the phoneme as a perceptuomotor unit, meaning that the articulatory gestures that generate phonological structure in the speech signal are just as fundamental to phonemic representations as are the acoustic correlates. The visual signal provides substantial access to those articulatory gestures, allowing a child to develop stronger representations of each phonemic category. It is therefore critical that children with hearing loss see, as well as hear, the speaker.

Children learn to understand speech by producing speech

Children acquire sensitivity to phonological structure as they learn to produce speech. It is not the case that children develop phonological representations, and then learn to produce those representations. Instead it is the case that by mastering the articulatory gestures and coordination among those gestures needed to impart phonological structure to the speech they generate, children learn to recognize that structure in the speech they hear. In fact, in a recent analysis involving the children in the longitudinal study reviewed in Section II, we found that how well the children with cochlear implants could produce speech at preschool was the strongest predictor of both semantic and phonological knowledge at fourth grade (Nittrouer, Lowenstein, & Antonelli, 2020).

Reading instruction promotes language development

Although we typically view the sorts of semantic and phonological skills reviewed here as foundational to reading acquisition, we have observed that the relationship is bidirectional, especially for children with cochlear implants (Nittrouer et al., 2018). Using cross-lagged analysis, we were able to demonstrate that earlier reading acquisition contributed to these children's phonological awareness and processing abilities at later ages, rather than the other way around. Similarly, reading instruction was found to effectively bolster children's vocabulary and syntactic knowledge and skills. Therefore, reading instruction should be started as early as possible with children with hearing loss, and should occur as frequently as possible. This instruction should focus on structure at both levels: semantic and phonological.

Summary

This report has presented one perspective on literacy acquisition by children with hearing loss, based on a unique design feature of human language. Duality of patterning refers to the feature of human language in which there exist two distinct levels of structure: semantic and phonological. In both evolution and development, it appears that phonological structure emerged (or emerges) later than semantic structure and relies more heavily on keen sensitivity to spectrotemporal details in the acoustic speech signal. In reading, different neural networks have evolved to handle the processing of each of these levels of structure in a quasi-independent manner, lending efficiency to the task. Typically developing children with normal hearing appear to acquire delineation in these network operations by second grade. Children with hearing loss encounter greater difficulty acquiring phonological than semantic structure in their linguistic representations, due to the degraded signals available to them. Because phonological structure emerges later than semantic structure, this problem can go largely unnoticed by clinicians and teachers in mainstream academic settings. Nonetheless, it inhibits the bifurcation in processing of semantic and phonological structure during reading observed for children with normal hearing, leading to inefficiencies in operation. Teachers and clinicians who work with these children need to plan interventions that promote development of these distinct processing strategies by alternately focusing attention on aspects of semantic (including morphological and syntactic) and phonological structure.

Acknowledgment

The help of Joanna Lowenstein in final manuscript preparation is gratefully acknowledged.

Funding: This work was supported by Grants No. R01 DC006237 and R01 DC015992 from the National Institute on Deafness and Other Communication Disorders, the National Institutes of Health.

Footnotes

Conflict of Interest: The author has no conflicts of interest to report.

References

Apel K, & Werfel K (2014). Using morphological awareness instruction to improve written language skills. Language, Speech, and Hearing Services in Schools, 45, 251–260.

Bolhuis JJ, Tattersall I, Chomsky N, & Berwick RC (2014). How could language have evolved? PLOS Biology, 12, e1001934.

Brownell R (2000). Expressive One-Word Picture Vocabulary Test (EOWPVT) (3rd ed.). Novato, CA: Academic Therapy Publications.

Byers BE, & Kroodsma DE (2009). Female mate choice and songbird song repertoires. Animal Behaviour, 77, 13–22.

Carrow-Woolfolk E (1999). Comprehensive Assessment of Spoken Language (CASL). Bloomington, MN: Pearson Assessments.

Charles-Luce J, & Luce PA (1995). An examination of similarity neighbourhoods in young children’s receptive vocabularies. Journal of Child Language, 22, 727–735.

de Boer B, Sandler W, & Kirby S (2012). New perspectives on duality of patterning: Introduction to the special issue. Language and Cognition, 4, 251–259.

Falk D (2004). Prelinguistic evolution in early hominins: Whence motherese? Behavioral and Brain Sciences, 27, 491–503.

Gelb IJ (1963). A study of writing. Chicago: University of Chicago Press.

Hockett CF (1960). The origin of speech. Scientific American, 203, 89–96.

Huettig F, Kolinsky R, & Lachmann T (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33, 275–277.

Jusczyk PW (1992). Developing phonological categories from the speech signal. In Ferguson CA, Menn L, & Stoel-Gammon C (Eds.), Phonological development: Models, research, implications (pp. 17–64). Parkton, MD: York Press.

Jusczyk PW (1993). From general to language-specific capacities: The WRAPSA model of how speech perception develops. Journal of Phonetics, 21, 3–28.

Leslie L, & Caldwell J (2006). Qualitative Reading Inventory–4. New York: Pearson.

Lieberman P (2015). Language did not spring forth 100,000 years ago. PLOS Biology, 13, e1002064.

McDermott KB, Petersen SE, Watson JM, & Ojemann JG (2003). A procedure for identifying regions preferentially activated by attention to semantic and phonological relations using functional magnetic resonance imaging. Neuropsychologia, 41, 293–303.

Menn L (1983). Development of articulatory, phonetic and phonological capabilities. In Butterworth B (Ed.), Language production: Development, writing and other language processes (pp. 3–50). New York: Academic Press.

Menyuk P, & Menn L (1979). Early strategies for the perception and production of words and sounds. In Fletcher P & Garman M (Eds.), Language acquisition (pp. 49–70). Cambridge, UK: Cambridge University Press.

Metsala JL (1997). An examination of word frequency and neighborhood density in the development of spoken-word recognition. Memory & Cognition, 25, 47–56.

Mintz TH (2003). Frequent frames as a cue for grammatical categories in child directed speech. Cognition, 90, 91–117.

Nittrouer S (2006). Children hear the forest. Journal of the Acoustical Society of America, 120, 1799–1802.

Nittrouer S (2010). Early development of children with hearing loss. San Diego, CA: Plural Publishing.

Nittrouer S, & Caldwell-Tarr A (2016). Language and literacy skills in children with cochlear implants: Past and present findings. In Young N & Kirk KI (Eds.), Pediatric cochlear implantation: Learning and the brain (pp. 177–197). New York: Springer.

Nittrouer S, Caldwell-Tarr A, & Lowenstein JH (2013). Working memory in children with cochlear implants: Problems are in storage, not processing. International Journal of Pediatric Otorhinolaryngology, 77, 1886–1898.

Nittrouer S, Caldwell-Tarr A, Sansom E, Twersky J, & Lowenstein JH (2014). Nonword repetition in children with cochlear implants: A potential clinical marker of poor language acquisition. American Journal of Speech-Language Pathology, 23, 679–695.

Nittrouer S, Lowenstein JH, & Antonelli J (2020). Parental language input to children with hearing loss: Does it matter in the end? Journal of Speech, Language, and Hearing Research, 63, 234–258.

Nittrouer S, Lowenstein JH, & Holloman C (2016). Early predictors of phonological and morphosyntactic skills in second graders with cochlear implants. Research in Developmental Disabilities, 55, 143–160.

Nittrouer S, Muir M, Tietgens K, Moberly AC, & Lowenstein JH (2018). Development of phonological, lexical, and syntactic abilities in children with cochlear implants across the elementary grades. Journal of Speech, Language, and Hearing Research, 61, 2561–2577.

Odom KJ, Hall ML, Riebel K, Omland KE, & Langmore NE (2014). Female song is widespread and ancestral in songbirds. Nature Communications, 5, 3379.

Poldrack RA, Wagner AD, Prull MW, Desmond JE, Glover GH, & Gabrieli JDE (1999). Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. NeuroImage, 10, 15–35.

Price CJ, Moore CJ, Humphreys GW, & Wise RJ (1997). Segregating semantic from phonological processes during reading. Journal of Cognitive Neuroscience, 9, 727–733.

Sandler W, Aronoff M, Meir I, & Padden C (2011). The gradual emergence of phonological form in a new language. Natural Language and Linguistic Theory, 29, 503–543.

Sesma HW, Mahone EM, Levine T, Eason SH, & Cutting LE (2009). The contribution of executive skills to reading comprehension. Child Neuropsychology, 15, 232–246.

Sims HC (1982). History of writing and its relevance in dyslexia. Journal of the Royal Society of Medicine, 75, 918.

Stanovich KE, Cunningham AE, & Cramer BB (1984). Assessing phonological awareness in kindergarten children: Issues of task comparability. Journal of Experimental Child Psychology, 38, 175–190.

Storkel HL (2002). Restructuring of similarity neighbourhoods in the developing mental lexicon. Journal of Child Language, 29, 251–274.

Studdert-Kennedy M (1987). The phoneme as a perceptuomotor structure. In Allport A, MacKay DG, Prinz W, & Scheerer E (Eds.), Language perception and production: Relationships between listening, speaking, reading, and writing (pp. 67–84). Orlando, FL: Academic Press.

Studdert-Kennedy M (1998). Introduction: The emergence of phonology. In Hurford JR, Studdert-Kennedy M, & Knight C (Eds.), Approaches to the evolution of language: Social and cognitive bases (pp. 169–176). Cambridge, UK: Cambridge University Press.

Wacewicz S, & Zywiczynski P (2015). Language evolution: Why Hockett’s design features are a non-starter. Biosemiotics, 8, 29–46.

Wagner RK, Torgesen JK, & Rashotte CA (1999). The Comprehensive Test of Phonological Processing (CTOPP). Austin, TX: Pro-Ed.

Walley AC (1993). The role of vocabulary development in children’s spoken word recognition and segmentation ability. Developmental Review, 13, 286–350.

Walley AC, Metsala JL, & Garlock VM (2003). Spoken vocabulary growth: Its role in the development of phoneme awareness and early reading ability. Reading and Writing: An Interdisciplinary Journal, 16, 5–20.

Wolf M, & Bowers PG (1999). The double-deficit hypothesis for the developmental dyslexias. Journal of Educational Psychology, 91, 415–438.

Zhou Y, McBride C, Leung JSM, Wang Y, Joshi M, & Farver J (2018). Chinese and English reading-related skills in L1 and L2 Chinese-speaking children in Hong Kong. Language, Cognition and Neuroscience, 33, 300–312.
