Abstract
Purpose
Caregivers may show greater use of nonauditory signals in interactions with children who are deaf or hard of hearing (DHH). This study explored the frequency of maternal touch and the temporal alignment of touch with speech in the input to children who are DHH and age-matched peers with normal hearing.
Method
We gathered audio and video recordings of mother–child free-play interactions. Maternal speech units were annotated from audio recordings, and touch events were annotated from video recordings. Analyses explored the frequency and duration of touch events and the temporal alignment of touch with speech.
Results
Greater variance was observed in the frequency of touch and its total duration in the input to children who are DHH. Furthermore, touches produced by mothers of children who are DHH were significantly more likely to be aligned with speech than touches produced by mothers of children with normal hearing.
Conclusion
Caregivers' modifications in the input to children who are DHH are observed in the combination of speech with touch. The implications of such patterns and how they may impact children's attention and access to the speech signal are discussed.
Hearing loss greatly affects children's access to speech input (Eisenberg, 2007) and places children who are deaf or hard of hearing (DHH) in dramatically different language learning situations than peers with normal hearing (NH). Limited access to speech impacts children's learning of linguistic constructs and reduces their “cumulative linguistic experience” (for a review, see Moeller & Tomblin, 2015). This linguistic experience may be further influenced by a hearing mismatch between caregivers and their children who are DHH, given that most of these caregivers are individuals with NH who do not use sign language and have not had previous experience with hearing loss (Mitchell & Karchmer, 2004).
Despite a lack of experience with individuals who are DHH, hearing caregivers may modify their input—both speech and nonspeech—to accommodate the needs of a child who is DHH. Such modifications to the input may depend on the nature of the interaction (free-play, feeding, diaper changing, book reading, etc.), the specific constructs being examined, and the way those constructs are defined (e.g., how amount of speech is measured). We speculate that caregivers of children who are DHH may adjust their input by presenting linguistic units within a richer multimodal context to compensate for the reduced access to auditory input that their children experience. Utilizing a simple free-play interaction, we explored whether such multimodal patterns exist in the input to children who are DHH, focusing on one multimodal cue that may occur with speech: touch.
Children who are DHH are frequently fitted with hearing aids (HAs) or cochlear implants (CIs) based on the severity of their hearing loss. As mentioned, the input to children who are DHH can be modified in a variety of ways, and such modifications can be measured using a wide array of constructs. Studies that have examined features of the input to this population have mostly focused on children fitted with CIs, with a smaller number of studies examining children fitted with HAs. In the sections below, we review this literature, along with another body of work that examined the input directed to children who are DHH not fitted with devices.
Modifications in the Input to Children With CIs
Mothers of children with CIs have been reported to be as talkative (in terms of number of utterances, total duration of speech, and number of turns) with their children as mothers of peers with NH (Fagan, Bergeson, & Morris, 2014; Vanormelingen, De Maeyer, & Gillis, 2016). Yet, they seem to produce shorter and simpler utterances (Fagan et al., 2014; Lund & Schuele, 2015; Vanormelingen et al., 2016), fewer syllables per utterance (Bergeson, Miller, & McCune, 2006; Kondaurova, Bergeson, & Xu, 2013; Vanormelingen et al., 2016), and fewer word types than mothers of age-matched children (Lund & Schuele, 2015).
The prosodic features of speech directed to children with CIs have also been examined. Mothers of children with CIs modulate their prosody similarly to mothers of children matched on hearing experience rather than age; these prosodic modifications are attested in pitch height, pause duration (Bergeson et al., 2006), average pitch, pitch range (Kondaurova et al., 2013), and prosodic markers of clause boundaries (i.e., preboundary vowel duration and pitch; Kondaurova & Bergeson, 2011). These findings suggest that mothers are sensitive to their children's hearing experience and utilize that sensitivity to modify their speech signal.
On the other hand, when examining the use of nonspeech cues in the input, studies show that the auditory–visual input to children with CIs does not differ from that provided to age-matched peers but is significantly different from that provided to children matched on vocabulary size (Lund & Schuele, 2015). This finding suggests that the different cues and features of the input are not modified in a unified fashion and that examining unimodal speech input alone does not tell the whole story; hence, the importance of thoroughly studying the features of multimodal child-directed input cannot be overstated.
Modifications in the Input to Children With HAs
Studies that have examined the input to children who are fitted with HAs include children with a wide range of severity of hearing loss (mild to severe). Yet, despite such variability in samples, these studies provide some important insights. Features of child-directed speech such as adult word count and conversational turns (as measured by the LENA system) in the input to 2.5-year-olds with HAs are not different from those obtained from the input to age-matched peers with NH (VanDam, Ambrose, & Moeller, 2012). Similarly, total number of utterances, total number of words, and length of utterances in morphemes do not differ between these two groups at 18 months (Ambrose, Walker, Unflat-Berry, Oleson, & Moeller, 2015). However, at 3 years of age, some differences start to emerge between the groups, such that the total number of words in the input to children with NH is higher than that in the input to children with HAs (Ambrose et al., 2015). These findings suggest that some modifications to the input may depend on a child's age and developmental stage.
A case study of one mother interacting with her twin boys—one with a bilateral hearing loss who was fitted with HAs and the other with NH—investigated the prosodic features of the input and showed no differences in the speech directed to both children; yet, the mother was rated as more emotionally available when interacting with her child with NH, and she was judged to be working harder to maintain the attention of the twin with hearing loss (Lam & Kitamura, 2010). A specific investigation of acoustic vowel space area and vowel space distribution in a larger sample also showed no differences between the mothers of children with HAs and mothers of age-matched peers with NH (Kondaurova, Bergeson, & Dilley, 2012). These results show that, at least in the acoustic domain, mothers do not seem to modify their speech signal to children with HAs. However, when it comes to incorporating nonspeech cues in their input, mothers of children with HAs use visual attention-getting strategies more often than mothers of children with NH (Koester & Lahti-Harper, 2010). This last finding suggests that, even if the speech signal is not modified in the input to children with HAs, cues that accompany speech may be modified.
Modifications in the Input to Children Who Are DHH but not Fitted With Devices
A third group that has been studied includes children with severe–profound hearing loss who had not been fitted with assistive devices at the time of study. This body of work examines social, emotional, and engagement-related features of maternal behaviors along with the use of nonspeech cues. The results show that, in general, mothers of children who are DHH spend more time directing their children's attention compared to mothers of children with NH (Spencer, Bodner-Johnson, & Gutfreund, 1992). For example, mothers use gestures, attention-getting touches (Lederberg & Everhart, 1998), and other attention getters (Goldin-Meadow & Saltzman, 2000) more frequently and initiate more interactions with children who are DHH than their peers with NH (Goldin-Meadow & Saltzman, 2000). Furthermore, mothers of children who are DHH are more likely to move objects into their children's line of vision and to use tapping or pointing to an object compared to mothers of children with NH (Waxman & Spencer, 1997). Interestingly, mothers of children who are DHH modify acoustic properties of speech directed to their children preimplantation, hence before children can even access such modifications (Kondaurova & Bergeson, 2011). They also use vocal games more frequently even though their children do not have access to the speech signal; yet, such games are multimodal in nature and often include tactile cues (Koester, Brooks, & Karkowski, 1998).
All of these findings suggest that mothers of children who are DHH try to adjust their interaction style in ways that may be beneficial for their children. Yet, other studies show that this group of mothers has lower sensitivity ratings (Meadow-Orlans, 1997; Meadow-Orlans & Spencer, 1996; Paradis & Koester, 2015) and lower participation, affect, and flexibility scores than mothers of children with NH (Meadow-Orlans, 1997); they are rated as having a harder time accommodating the needs of their children and as being less prepared to use strategies to facilitate language learning (Prendergast & McCollum, 1996).
Interim Summary
The literature reviewed above paints a complex picture of the features of the input directed to children who are DHH. First, it is evident that there are a variety of ways in which caregivers can modify the input to their children who are DHH. Second, findings showing that caregivers modify both their speech and nonspeech input even when their children are not fitted with assistive hearing devices suggest that changes to the input may occur regardless of the type or presence of an assistive hearing device. Hence, it is critical to characterize the modifications in the input to children who are DHH in more detail, but also in a general fashion. Here, we examine the use of tactile cues in maternal input to children who are DHH and explore how those cues are combined with speech. Our approach is driven by the hypothesis that the mere presence of hearing loss acts as a perturbation to features of dyadic interaction and may cause mothers to alter their input regardless of the type of hearing device their children are fitted with. Note that a recent investigation by Smith and McMurray (2018) took a similar approach, combining children with CIs and children with HAs into one group and comparing them to their age-matched peers with NH. The results of this study showed no significant main effects of hearing status on measures of temporal responsiveness but revealed a greater variability in these measures among the children who are DHH.
Multimodal Child-Directed Input
A simple observation of any interaction with a child, regardless of hearing status, reveals that speech does not stand alone. Infant-directed communication includes gestures (O'Neill, Bard, Linnell, & Fluck, 2005), facial expressions (Chong, Werker, Russell, & Carroll, 2003; Nomikou & Rohlfing, 2011), touches (Abu-Zhaya, Seidl, & Cristia, 2017; Nomikou, Koke, & Rohlfing, 2017; Nomikou & Rohlfing, 2011), and actions (Gogate, Bahrick, & Watson, 2000; Nomikou & Rohlfing, 2011). Some of these nonspeech cues that accompany speech have been shown to systematically mark edges of linguistic units (Abu-Zhaya et al., 2017; Nomikou & Rohlfing, 2011) or exemplify the meaning of verbs (Nomikou et al., 2017) for infants. For instance, during diaper changing interactions, mothers use eyebrow raises to mark the edges of utterances by virtue of temporally aligning their facial expressions with the beginnings and ends of utterances (Nomikou & Rohlfing, 2011). Furthermore, mothers align their production of verbs during diaper changing with the implementation of actions that refer to those verbs (e.g., picking up a clean diaper while announcing “so then, we take the diaper”; Nomikou et al., 2017). This body of work shows that even caregivers of typically developing infants with NH use nonspeech cues in ways that may scaffold their children's access to speech. In this study, we explore the use of one specific nonspeech cue, touch, in combination with speech in caregiver–child interactions. Our main objective is to identify the frequency of use of touch cues and characterize the patterns by which touch and speech are combined in the input to children who are DHH and their peers with NH.
Why Focus on Touch?
The motivation to focus on touch stems from its prevalent use by caregivers when interacting with their children (e.g., Ferber, 2004; Ferber, Feldman, & Makhoul, 2008) and its significant effects on children's development (for a review, see Field, 2010; Feldman, Eidelman, Sirota, & Weller, 2002; Feldman, Rosenthal, & Eidelman, 2014). The importance of touch in early dyadic interactions and its impact on development may be traced back to the fact that touch is one of the first senses to develop (Gottlieb, 1971); hence, of all sensory modalities, touch is the least likely to be affected by lack of stimulation or sensory deficits to other senses, as is the case of later developing sensory systems that are more susceptible to influence from earlier developing systems (Bremner, Lewkowicz, & Spence, 2012).
A recent body of work, utilizing a location- and type-based classification of touch and focusing on the co-occurrence of touch with speech, revealed that mothers align their touch cues with their speech, producing temporally packaged multimodal linguistic units (Abu-Zhaya et al., 2017; Nomikou & Rohlfing, 2011). The alignment of cues in this manner goes beyond exerting positive effects on the general quality of dyadic interactions; it provides the infant with cues to the edges of linguistic units. The significance of this multimodal cue alignment to the segmentation problem should not be underestimated. Although several cues embedded in the speech signal itself have been found to aid in the segmentation of word forms (e.g., Bortfeld, Morgan, Golinkoff, & Rathbun, 2005; Pelucchi, Hay, & Saffran, 2009; Seidl & Johnson, 2006), no single cue has been found to be reliable at all times (Mattys, White, & Melhorn, 2005) or true to the complex nature of language input in real-life interactions (Johnson, 2012). Hence, the combination of speech with touch cues (and perhaps also other nonspeech cues) in a systematic manner may be of great help to the language learner in general and may even be of greater aid to the language learner who has a reduced access to the speech signal, as is the case of children who are DHH.
In the input to infants with NH, touch has often been classified as an attention-getting behavior, a classification that necessitates the presence of an infant response to a touch in order for that event to count as attention getting (Jean & Stack, 2012; Jean, Stack, & Fogel, 2009). Typically, auditory input can serve the purpose of garnering attention to an event or an entity (think of a moment you wanted a friend to notice a spider crawling on the wall beside them, so you yelled at them to get their attention). Yet, when access to such input is impoverished or absent, as in the case of children who are DHH, other sensory modalities might come to the rescue; touch cues can easily serve this purpose. In fact, studies that have examined the use of touch cues with children who are DHH almost exclusively classified touch as an attention-getting behavior (e.g., Goldin-Meadow & Saltzman, 2000; Lederberg & Everhart, 1998; Loots & Devisé, 2003; with the exception of Koester, Brooks, & Traci, 2000). Most of these studies report that deaf parents use visual–tactile strategies (i.e., tapping, entering into the child's visual field, and waiting for the child to be watching before introducing linguistic input) to get the attention of their child who is DHH more frequently than parents with NH who have children who are DHH (Loots & Devisé, 2003; Prendergast & McCollum, 1996). Other studies suggest that deaf mothers use attention-getting touch more frequently than mothers with NH regardless of the hearing status of their children (Koester & Lahti-Harper, 2010; Paradis & Koester, 2015; Waxman & Spencer, 1997). Interestingly, mothers with NH who have children who are DHH (without assistive devices) also use touch cues as attention-getting behaviors more frequently than mothers with NH who have children with NH (Goldin-Meadow & Saltzman, 2000; Lederberg & Everhart, 1998) and accommodate their children's hearing status by engaging them via multiple sensory modalities (Depowski, Abaya, Oghalai, & Bortfeld, 2015). Despite the inconsistencies within this body of work and the fact that some of the studies had very small samples (e.g., n = 4 in Depowski et al., 2015), these findings cannot be ignored. In general, this body of work shows that we are likely to observe differences in the use of touch cues in interactions with children who are DHH as opposed to their peers with NH but that such differences may depend on the exact classification of touch. Yet, further research is needed to specifically characterize how caregivers with NH alter their communication style to accommodate their children who are DHH by utilizing touch and speech in their input.
In summary, the hearing status of children who are DHH seems to impact caregiver behavior, such that caregivers, regardless of their own hearing status, utilize various features of speech and touch cues (among other nonspeech cues) in the input to their children. However, there are other ways in which touch cues can be used in the input to children who are DHH. As mentioned, in the input to infants with NH, touch is aligned with linguistic units (Abu-Zhaya et al., 2017; Nomikou & Rohlfing, 2011)—a pattern that may be informative and helpful to the language learner. First, if present in the input to children who are DHH, such temporally packaged multimodal input may simply boost children's attention to the speech stream. Specifically, given their reduced access to speech, a multimodal touch and speech signal can potentially provide children who are DHH with a sensory input that is highly redundant; touch can be felt and seen at the same time, hence is internally redundant, and if it occurs with speech, which stimulates the auditory system, then the entire multimodal event stimulates three sense modalities at the same time, creating an event that is hard to ignore. Second, the alignment of touch with the edges of linguistic units (i.e., utterances, phrases, words, syllables) may help children who are DHH identify and extract these units in a more efficient manner and may provide additional cues for speech segmentation.
The Current Study
Using a dyadic free-play interaction with a set of three quiet toys, we gathered data on the frequency of maternal touch and how it is aligned with speech directed to children who are DHH and their age- and gender-matched peers with NH. Our sample was a culturally homogenous group of White middle-class families. Such homogeneity allowed us to avoid any differences between the groups or the dyads that may stem from cultural diversity, given reports on culturally based differences in the frequency of use of touch in dyadic interactions (Franco, Fogel, Messinger, & Frazier, 1996; but see Goldin-Meadow & Saltzman, 2000, for a lack of such cultural effects on the use of attention-getting behaviors with children with severe-to-profound hearing loss). Our main question of interest was whether tactile and tactile–auditory input to children who are DHH differs from that provided to their peers with NH. In line with previous work (Abu-Zhaya et al., 2017; Nomikou & Rohlfing, 2011), we gathered video data from mother–child play interactions and annotated touch events in terms of their location and type without attempting to classify their functions or caregivers' intentions. The multimodality of language input (e.g., Gogate et al., 2000; Gogate, Maganti, & Bahrick, 2015) and the systematic temporally aligned touch and speech combinations produced by caregivers with NH when interacting with their infants with NH (Abu-Zhaya et al., 2017; Nomikou & Rohlfing, 2011) are the foundations for this study's predictions. Specifically, given that these patterns occur naturally, we predict that having a child who is DHH may not necessarily create new patterns in mothers' production of touch cues but may influence the frequency with which touch and the alignment of touch and speech are utilized. Mothers of children who are DHH may utilize touch cues more frequently in the input to their children and may also combine their touches with linguistic units more frequently.
Method
Participants
Twelve children who were DHH (nine males, three females; age range: 11.1–42.8 months, M = 27.93, SD = 9.37; see Table 1) and were fitted with either HAs (n = 6) or CIs (n = 6) were recruited from the Heuser Hearing (HH) Institute, Louisville, KY; the Department of Speech, Language, and Hearing Sciences at Purdue University, IN; and the Department of Otolaryngology–Head and Neck Surgery, Indiana University School of Medicine (IUSM). 1 Mothers of all children reported having NH, identified as non-Hispanic/White, and had between 12 and 18 years of education (M = 15.36, SD = 2.2; see Table 1). All children who were DHH were enrolled in educational programs using oral communication at the time of the visit; two children also received some American Sign Language input. As shown in Table 2, children who are DHH had a wide range of severity of hearing loss (mild–profound), 2 with a mix of laterality.
Table 1.
Demographic information for the children who are deaf or hard of hearing (DHH) and their age- and gender-matched peers with normal hearing (NH).
| ID | Device (DHH) | Age (DHH) | Gender (DHH) | Maternal education (DHH) | Age (NH) | Gender (NH) | Maternal education (NH) |
|---|---|---|---|---|---|---|---|
| 1 | HA | 11.1 | M | 16 | 11.12 | M | 12 |
| 2 | HA | 14.7 | M | 12 | 15.16 | M | 16 |
| 3 | CI | 20.4 | M | 16 | 19.34 | M | 18 |
| 4 | CI | 25.07 | F | 15 | 25.23 | F | 18 |
| 5 | CI | 27.13 | M | 12 | 28.16 | M | 16 |
| 6 | HA | 27.3 | F | 14 | 27.93 | F | 16 |
| 7 | CI | 28.8 | M | 18 | 28.88 | M | 18 |
| 8 | CI | 31.41 | F | 18 | 31.38 | F | 12 |
| 9 | HA | 31.97 | M | 18 | 31.61 | M | 16 |
| 10 | HA | 34.34 | M | 12 | 34.31 | M | 12 |
| 11 | HA | 40.2 | M | 16 | 40.53 | M | 16 |
| 12 | CI | 42.8 | M | 14 | 41.9 | M | 17 |
Note. Children's age is in months, and maternal education is in years. HA = hearing aid; M = male; CI = cochlear implant; F = female.
Table 2.
Characteristics of individual participants who are deaf or hard of hearing.
| ID | Age of identification | Laterality | Etiology | Degree of hearing loss | Device | Device type | Age of fitting | Early intervention a | Communication method b |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 weeks | Unilateral | Nerve damage | Moderate–severe | HA | L, ReSound | 2 mos | NR | Oral |
| 2 | At birth | Bilateral | Unknown | Mild–moderate | HA | Oticon Sensei | 3 mos | None | Oral + ASL |
| 3 | 6 weeks | Bilateral | Unknown | Severe–profound | CI | Nucleus 6 | 15 mos | Speech | Oral |
| 4 | 3 weeks | Bilateral | Unknown | R moderate–severe; L severe–profound | CI | Advanced Bionics Naida | 2 mos | Speech | Oral + ASL |
| 5 | At birth | Bilateral | ANSD | Moderate–severe | CI | NR | R 21 mos | Speech | Oral |
| 6 | < 5 mos | Unilateral | Unknown | R mild–moderate | HA | R, Oticon Sensei | 5 mos | Speech | Oral |
| 7 | 9 mos | Bilateral | Congenital CMV | Severe–profound | CI | Nucleus 6 | 11 mos | NR | Oral |
| 8 | At birth | Bilateral | NR | Severe–profound | CI | NR | R 10 mos | Speech | Oral |
| 9 | At birth | Bilateral | Unknown | Mild–moderate | HA | Oticon Sensei | 2 mos | Speech | Oral |
| 10 | < 6 mos | Bilateral | Unknown | Mild | HA | Oticon Safari | 6 mos | Speech | Oral |
| 11 | 5 weeks | Unilateral | Unknown | R moderate–severe | HA | R, Oticon | 4 mos | None | Oral |
| 12 | At birth | Bilateral | Usher syndrome | Severe–profound | CI | Cochlear N6 | 15 mos | Speech | Oral |
Note. HA = hearing aid; L = left ear; mos = months; NR = no record; ASL = American Sign Language; CI = cochlear implant; R = right ear; ANSD = auditory neuropathy spectrum disorder; CMV = cytomegalovirus.
a Speech therapy was provided through First Steps.
b Families that indicated using ASL specifically mentioned that the main communication channel was oral language and ASL was used less than 20% of the time.
Twelve children with NH (nine males, three females; age range: 11.12–41.9 months, M = 27.96, SD = 9.28; see Table 1) were recruited from the local community in West Lafayette, IN. Mothers of all children reported having NH, identified as non-Hispanic/White, and had between 12 and 18 years of education (M = 15.58, SD = 2.31; see Table 1). Each of these children was matched with one of the children who were DHH on both chronological age and gender.
A Wilcoxon rank-sum test showed no significant difference between the groups on maternal education (Z = −0.35, p = .725). Depending on the site, mothers were either reimbursed $20 or received a book or toy as a gift for their child.
Procedure
All mothers and their children participated in a dyadic play interaction (a common method in studying the input to children who are DHH; e.g., Bergeson et al., 2006; Harris & Chasin, 2005; Lam & Kitamura, 2010; Waxman & Spencer, 1997). Mothers were asked to play with their children using three quiet toys (a plastic cat, a plastic dog, and a soccer ball) as they would normally do at home while seated on the floor for 6 min. These toys were chosen because of their simplicity and familiarity to most children growing up in Western households, which allowed us to elicit a simple naturalistic play interaction. Our choice of 6 min as a time window for data collection fits with previous literature. In studies examining language-related measures from parent–child play interactions, parents are typically instructed to play with their children for periods that range between 5 and 10 min (Bornstein, Hahn, & Haynes, 2004; Depowski et al., 2015; Jean et al., 2009; Kondaurova et al., 2013); some researchers choose to collect data from longer periods but eventually analyze only 5–7 min of the sample (Loots & Devisé, 2003; Tamis-LeMonda, Kuchirko, & Tafuro, 2013). This large body of literature shows that 5–10 min of play interactions can produce a wealth of data to allow for microgenetic coding of caregivers' behaviors. Yet, it is necessary to remain cautious about how representative such samples are of children's daily routines.
Researchers who interacted with the families did not mention an interest in examining features of child-directed vocal and/or tactile input, nor did they mention the comparison between input directed to children who were DHH and peers with NH. Interactions were audio- and video-recorded. Mothers' speech was recorded using an SLX Wireless Microphone System (Shure; HH and IUSM). This system included an SLX1 Bodypack transmitter with a built-in microphone and a wireless receiver SLX4, which was connected to a Panasonic HC-V750 full HD camcorder (HH) or a Canon 3CCD Digital Video Camcorder GL2, NTSC (IUSM). At Purdue University, mothers wore a clip-on lavalier microphone (AKG SR40 Flexx) that was wirelessly connected to a Toshiba Camileo X200 full HD camcorder.
Coding
All undergraduate research assistants (RAs) who coded the video and audio files were trained by the first author based on a previously developed training plan. RAs were trained on a data set from a different project and then moved to coding data from the current project once they reached reliability. Furthermore, the first author held monthly meetings with RAs to discuss any issues with coding and resolve discrepancies. Eight pairs of RAs worked on annotating maternal touches in ELAN, and six other RAs worked on annotating maternal speech.
Touch Coding
A template was created in ELAN (Brugman & Russel, 2004) allowing for unified annotation of touch events across videos. The template was based on that used by Abu-Zhaya et al. (2017) with a few modifications implemented to fit the current project. The template consisted of two sets of two identical tiers, allowing the annotation of touch events that were produced using two hands; for instance, in the event the mother produced temporally overlapping touch events with both hands, each touch was annotated on a separate set of tiers. The two tiers allowed us to log the following information about each touch event: its location (arm, face, foot, hand, head, leg, and torso) and its type (divided into with or without a toy: brush, grab, squeeze, tap, brush with toy, tap with toy, etc.). A full list of touch types and a description of each type can be found in the Appendix.
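For readers who want a concrete picture of the annotation scheme, the sketch below (Python) shows one way the location and type vocabularies described above could be represented and validated for each logged event. The data structure and field names are illustrative assumptions on our part; the study itself used ELAN tiers, not code.

```python
from dataclasses import dataclass

# Controlled vocabularies taken from the coding scheme described above;
# the TouchEvent structure itself is an illustrative assumption, not the ELAN template.
LOCATIONS = {"arm", "face", "foot", "hand", "head", "leg", "torso"}
TYPES_WITHOUT_TOY = {"brush", "grab", "hold", "move", "other", "pinch",
                     "poke", "rest", "squeeze", "tap", "tickle"}
TYPES_WITH_TOY = {"brush with toy", "tap with toy", "other with toy"}

@dataclass
class TouchEvent:
    onset: float       # seconds from the start of the recording
    offset: float      # seconds from the start of the recording
    location: str      # one of LOCATIONS
    touch_type: str    # one of TYPES_WITHOUT_TOY or TYPES_WITH_TOY
    tier_set: int      # 1 or 2: which of the two tier sets logged the event (one per hand)

    def __post_init__(self):
        assert self.offset > self.onset, "a touch must have positive duration"
        assert self.location in LOCATIONS, f"unknown location: {self.location}"
        assert self.touch_type in TYPES_WITHOUT_TOY | TYPES_WITH_TOY, \
            f"unknown touch type: {self.touch_type}"

    @property
    def duration(self) -> float:
        return self.offset - self.onset

# Example: a 0.8-s tap on the child's hand, produced without a toy in hand.
event = TouchEvent(onset=12.4, offset=13.2, location="hand", touch_type="tap", tier_set=1)
print(round(event.duration, 2))  # 0.8
```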
RAs who were trained by the first author performed all annotations of intentional maternal touch events. Intentional touches were defined as those in which the coder judged that the mother intentionally touched her child on any location on her child's body. Touches that were judged to have resulted from accidental body contact between the mother and her child and those that were initiated by the child were not annotated. All annotations of touch events were performed by pairs of RAs who watched silent videos of the interactions and annotated events only after reaching consensus regarding their features. When there was any confusion or uncertainty regarding the nature of the touch event within the pair, it was settled through consulting another pair of RAs. Upon completion of annotation of touch events, a Praat textgrid was extracted from the ELAN file for each dyad.
Speech Coding
Mothers' speech was annotated using Praat 5.4.04 (Boersma & Weenink, 2005) by RAs trained in acoustics. In order to study the multimodal patterns in the input to children who were DHH and their peers with NH and to specifically explore whether touch events were aligned with speech differently in the two groups, we relied on the timing of touch events to explore their proximity to units of speech. To perform these analyses, we used Praat textgrids that were extracted from the touch annotation of each video file in ELAN, along with the audio files from each interaction, and created a new text tier on which we annotated all the utterances that occurred within 0.5 s from the edges of a touch event. Utterances were defined as sequences of words that were less than 0.3 s apart (as in Abu-Zhaya et al., 2017). The end product was a new Praat textgrid that included the annotation of touch events and utterances that occurred in proximity to those touch events.
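The two timing rules in this paragraph (grouping words into utterances when gaps are shorter than 0.3 s, and flagging utterances that fall within 0.5 s of a touch event's edges) can be summarized in a short sketch. The Python below is an illustrative reimplementation under assumed interval formats, not the Praat scripts used in the study.

```python
# Intervals are assumed to be (onset_s, offset_s) tuples already extracted from the textgrids.
MAX_WITHIN_UTTERANCE_GAP = 0.3   # seconds (as in Abu-Zhaya et al., 2017)
PROXIMITY_WINDOW = 0.5           # seconds from a touch event's edges

def group_words_into_utterances(word_intervals):
    """Merge word intervals into utterances whenever the silence between
    consecutive words is shorter than MAX_WITHIN_UTTERANCE_GAP."""
    utterances = []
    for onset, offset in sorted(word_intervals):
        if utterances and onset - utterances[-1][1] < MAX_WITHIN_UTTERANCE_GAP:
            utterances[-1] = (utterances[-1][0], offset)   # extend the current utterance
        else:
            utterances.append((onset, offset))             # start a new utterance
    return utterances

def near_a_touch(utterance, touch):
    """True if the utterance overlaps the touch interval expanded by 0.5 s on
    each side, so it should be annotated on the new text tier."""
    u_on, u_off = utterance
    t_on, t_off = touch
    return u_on <= t_off + PROXIMITY_WINDOW and u_off >= t_on - PROXIMITY_WINDOW

words = [(1.0, 1.2), (1.3, 1.6), (2.4, 2.7)]   # two utterances: one of two words, one of one
touches = [(1.1, 1.5)]
utts = group_words_into_utterances(words)
print([u for u in utts if any(near_a_touch(u, t) for t in touches)])  # [(1.0, 1.6)]
```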
Coding Reliability
About 20% of the sample (n = 5) was annotated by the first author for reliability. Reliability measures were calculated based on annotations of the videos by the first author as compared with the annotations of a randomly chosen sample of videos annotated by RAs. We first computed an intraclass correlation coefficient to determine the amount of variance in the number of touch events that can be attributed to differences between dyads or differences that are a function of coding. Our analyses revealed that about 98% of the variance in the number of touch events can be attributed to the natural variation between participants, whereas the remaining 2% can be attributed to variance between coders. Second, we inspected the reliability of the timing of touch events by examining the difference in both the beginning and end time stamps of the same events as annotated by the first author and the RAs. We created a difference score for each beginning time stamp by subtracting the first author's results from those of the RAs; a similar difference score was created for the end time stamp. When examining the distribution of these difference scores, we found that the 10th–90th percentile range of the difference in beginning time stamps is −0.139 to 0.1866 and that of the difference in ending time stamps is −1.465 to 0.1602. These results suggest that the timing of the majority of touch events was captured reliably by both groups of coders, namely, first author and RAs. These results, along with the previous findings regarding the number of events, imply that the alignment analyses are highly reliable as well. Finally, Cohen's κ was calculated to determine the quality of agreement between the annotations of the RAs and the first author in terms of the location and type of touch events. In line with the statistical analyses of these data, the type of touch was reclassified as either with or without a toy, and the locations of touches were grouped into four categories. Results showed there was very good agreement between coders in judging the location, κ = .943, 95% CI [0.866, 1.0], p < .0001, and the type, κ = .954, 95% CI [0.866, 1.0], p < .0001, of touch events.
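As an illustration of two of these reliability checks, the sketch below computes the percentile range of time-stamp difference scores and Cohen's κ for categorical labels. The data and helper names are hypothetical; the study's own analysis scripts are not reproduced here.

```python
import numpy as np

def timing_difference_range(ra_times, author_times, lo=10, hi=90):
    """Difference scores (RA minus first author) for matched time stamps,
    summarized as the 10th-90th percentile range."""
    diffs = np.asarray(ra_times) - np.asarray(author_times)
    return np.percentile(diffs, [lo, hi])

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels of the same events."""
    labels_a, labels_b = list(labels_a), list(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_chance = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy example: location categories for five touch events coded by both coders.
ra     = ["torso", "torso", "arm/hand", "foot/leg", "face/head"]
author = ["torso", "torso", "arm/hand", "foot/leg", "torso"]
print(cohens_kappa(ra, author))                              # ~0.71
print(timing_difference_range([1.02, 2.51], [1.00, 2.60]))   # 10th-90th percentile of diffs
```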
Data Extraction
Data from the speech and touch coding were extracted using custom-written Praat scripts.
Frequency of Touches
We extracted each of the touch events and logged its location, type, and beginning and end times, as well as its duration. We then tabulated the frequency of touch events per dyad, as well as the total duration of touches.
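A minimal sketch of this tabulation step, using hypothetical touch events rather than the study data, is shown below.

```python
import pandas as pd

# Hypothetical touch events for two dyads; in the study these fields came from the
# ELAN/Praat annotations (location, type, onset, offset per event).
touch_events = pd.DataFrame({
    "dyad":     ["d01", "d01", "d01", "d02"],
    "location": ["torso", "hand", "leg", "torso"],
    "type":     ["tap", "grab", "brush", "tap with toy"],
    "onset":    [12.4, 40.1, 55.0, 8.3],
    "offset":   [13.2, 41.0, 55.6, 9.9],
})
touch_events["duration"] = touch_events["offset"] - touch_events["onset"]

# Per-dyad frequency of touch events and total touch duration (in seconds).
per_dyad = touch_events.groupby("dyad")["duration"].agg(["count", "sum"])
per_dyad.columns = ["touch_frequency", "total_touch_duration"]
print(per_dyad)
```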
Touch–Utterance Alignment
We examined the temporal alignment between touch events and utterances using a constant criterion of 0.5 s; this criterion was chosen following careful examination and consideration of the literature. Due to a lack of understanding of the nature of temporal alignment between auditory–tactile events in the input to children and how such alignment may be perceived by the child, we resorted to findings on audiovisual input to typically developing infants and infants' detection of audiovisual temporal synchrony. These lines of work revealed discrepancies and inconsistencies in the choices made regarding the temporal window defining synchrony or asynchrony. For instance, in their exploration of verb–action temporal alignment in the input to young infants, Nomikou et al. (2017) judged events that are 2 s apart as asynchronous, whereas Gogate et al. (2000) used a tighter window of greater than 0.5 s when judging object labels and object motion to be asynchronous. On the other hand, when testing infants' sensitivity to temporal synchrony in mapping syllables to objects, Gogate, Prince, and Matatyaho (2009) created asynchronous stimuli by using a time window of 1.2 s. More broadly though, infants have been shown to be able to detect audiovisual speech asynchrony when the temporal window is about 0.6 s or greater (Lewkowicz, 2010). Based on these results, along with the narrowing of the temporal window through which events are judged to be synchronous or asynchronous in development (Lewkowicz, 1996), we reasoned that, when the input in our sample is multimodal, it will be tightly packaged in time; hence, we chose to use a 0.5 s window to explore the temporal relationships between utterances and touch events.
Using the annotation of utterances in proximity to touch and the time window of 0.5 s, we examined whether each touch event was aligned with an utterance. First, we examined whether an utterance occurred within 0.5 s from the beginning, end, or midpoint of each touch event. If no utterance overlapped with the touch at any of these time points, we explored whether there was an utterance that occurred at any point during the touch. If a touch was found to overlap with an utterance following these criteria, they were considered to be temporally aligned; otherwise, the touch was judged to have occurred without any overlapping utterances. During this step, we logged the duration and specific features of events (location and type of touch events, content of speech).
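The decision rule described above can be made explicit in a short sketch. The code below is an illustrative reimplementation under assumed interval formats (onset and offset in seconds), not the custom Praat scripts used for the analyses.

```python
ALIGNMENT_WINDOW = 0.5  # seconds

def contains(utterance, time_point, window=ALIGNMENT_WINDOW):
    """True if the utterance overlaps a window of +/- `window` s around time_point."""
    u_on, u_off = utterance
    return u_on <= time_point + window and u_off >= time_point - window

def is_aligned(touch, utterances):
    """A touch counts as aligned if an utterance falls within 0.5 s of its
    beginning, end, or midpoint, or (failing that) overlaps it at any point."""
    t_on, t_off = touch
    midpoint = (t_on + t_off) / 2
    for utt in utterances:
        if any(contains(utt, point) for point in (t_on, midpoint, t_off)):
            return True
    # Fallback: any overlap between an utterance and the touch interval itself.
    return any(u_on < t_off and u_off > t_on for u_on, u_off in utterances)

touches = [(3.0, 4.0), (10.0, 10.2)]
utterances = [(2.8, 3.6), (12.0, 12.5)]
proportion_aligned = sum(is_aligned(t, utterances) for t in touches) / len(touches)
print(proportion_aligned)  # 0.5: the first touch is aligned with speech, the second is not
```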
Data Analysis
In order to explore differences in speech and touch input to the two groups (NH and DHH), we compared the input to children who were DHH to the input to their age- and gender-matched peers with NH. Given that children were matched on age and gender, we could treat the two groups as paired samples and use the relevant statistical tests for matched or paired observations to determine whether the groups differed on our measures of interest. This allowed us to compare each child who was DHH to their respective age- and gender-matched peer with NH instead of merely testing group differences. When measures were not normally distributed, nonparametric tests were used. We specifically tested whether the input to the groups differed in terms of the (a) number of touch events and total touch duration, (b) types and locations of touch events, and (c) proportion of touch events that were aligned with utterances out of the total number of touch events per dyad.
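For concreteness, the sketch below applies the paired and variance tests used in this study (a sign test, the Wilcoxon signed-ranks test, and Levene's test) to hypothetical per-dyad touch counts with SciPy; it is illustrative only and does not reproduce the study's analysis scripts.

```python
import numpy as np
from scipy import stats  # requires SciPy >= 1.7 for stats.binomtest

# Hypothetical per-dyad touch counts, ordered so that index i refers to the same
# age- and gender-matched pair in both arrays (these are not the study data).
dhh = np.array([0, 5, 12, 30, 44, 51, 60, 75, 90, 110, 150, 189])
nh  = np.array([0, 3, 10, 11, 12, 13, 15, 16, 18, 20, 35, 53])

# Sign test on the paired differences (zero differences are dropped).
diffs = dhh - nh
nonzero = diffs[diffs != 0]
sign_test = stats.binomtest(int((nonzero > 0).sum()), n=len(nonzero), p=0.5)

# Wilcoxon signed-ranks test on the same matched pairs.
wilcoxon_result = stats.wilcoxon(dhh, nh)

# Levene's test for equality of variances between the two groups.
levene_result = stats.levene(dhh, nh)

print(sign_test.pvalue, wilcoxon_result.pvalue, levene_result.pvalue)
```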
Results
Touch Frequency and Duration
For each dyad, we tabulated the frequency of touch events produced during the play interaction, as well as the total amount of time each mother spent touching her child. Given that the normality assumption was not met for either the frequency of touch events or the total duration of touch in the input to both children who were DHH and their peers with NH, we tested the differences between the groups using nonparametric tests. A sign test examining the difference between the medians of the two paired samples in terms of the frequency of touch yielded a nonsignificant result (M = 1, p = .774). However, a visual observation of the data reveals a larger variability in the frequency of touch in the input to children who were DHH (see Figure 1). This observation was supported by Levene's test of equal variances, F(1, 22) = 10.10, p = .0043, which revealed a significant difference in variance between these two groups of mothers, such that there was a greater variance in the use of touch among mothers of children who were DHH (M = 42.66, SD = 57.11, range: 0–189) compared to mothers of children with NH (M = 13.66, SD = 14.96, range: 0–53). Similar results were obtained for the measure of touch duration; a Wilcoxon signed-ranks test showed that the medians of the total duration of touch provided to children who were DHH and their peers with NH did not differ (S = 15, p = .266); however, Levene's test of equal variances revealed a greater variance in the duration of touch among mothers of children who were DHH (M = 41.76, SD = 58.05, range: 0–188.4 s) compared to mothers of children with NH (M = 11.402, SD = 16.722, range: 0–59.05 s), F(1, 22) = 10.836, p = .0033.
Figure 1.
Frequency of touch events produced by mothers of children who are deaf or hard of hearing (DHH; gray) and mothers of children with normal hearing (NH; white). Each error bar is constructed using 1 SEM.
Touch Type and Location
Despite the greater variance in the use of touch among mothers of children who were DHH for both measures of frequency and total duration, the patterns in which touch was used were similar between the two groups. When we examined the location of touches (the body part that the mother touched collapsed into four categories: arm and hand, face and head, foot and leg, and torso) and the proportion of touch events to each location category out of the total number of touches each mother produced, we identified a strikingly similar pattern across the two groups. As evident from Figure 2, the majority of touches produced by mothers from both groups occurred on the child's torso (DHH: M = 44%, SD = 26%; NH: M = 40%, SD = 23%), followed by touches on the child's feet and legs (DHH: M = 30%, SD = 19%; NH: M = 28%, SD = 28%), and then followed by touches on the child's arms and hands (DHH: M = 21%, SD = 18%; NH: M = 22%, SD = 21%); the least common locations of touches in both groups were the head and face (DHH: M = 3%, SD = 4%; NH: M = 8%, SD = 11%).
Figure 2.
The proportions of location of touches delivered by mothers in each of the groups (deaf or hard of hearing [DHH] and normal hearing [NH]) divided into four location groups: arm and hand, face and head, foot and leg, and torso.
Moreover, when examining patterns in the mechanism of touch delivery, that is, with or without a toy in hand, we found that mothers in both groups (DHH and NH) employed the two mechanisms in an identical pattern, such that most of their touches were produced without a toy in hand (DHH: M = 71%, SD = 27%; NH: M = 69%, SD = 23%; see Figure 3). These findings highlight similar patterns for maternal touch locations, as well as mechanisms of touch delivery during free-play interactions regardless of children’s hearing status.
Figure 3.
The proportions of the mechanism of touch use (with or without toy) by mothers in each of the groups (deaf or hard of hearing [DHH] and normal hearing [NH]).
Touch–Utterance Alignment
Finally, we explored whether mothers aligned touch events with their utterances. Upon extracting the utterances that overlapped with touch events, for each dyad, we tabulated the proportion of touch events that overlapped with speech out of the total number of touches that mothers produced. A Wilcoxon signed-ranks test showed that mothers of children who were DHH were more likely to align their touches with utterances than mothers of children with NH (DHH: M = 0.863, SD = 0.158; NH: M = 0.658, SD = 0.261; Wilcoxon signed-ranks test: S = 18, p = .0391, r = .495 [a large effect size]; see Figure 4). Unlike previous measures, here, we did not find evidence for unequal variance between the groups, as shown by Levene's test of equal variances, F(1, 16) = 0.528, p = .478. Importantly, when examining how well aligned the streams were by calculating the time difference between the edges of each touch event and those of the utterance aligned with it, we found that, when producing such multimodal events, mothers in both groups aligned the beginning of their touches with the beginnings of utterances (DHH: M = 0.46, SD = 0.56; NH: M = 0.39, SD = 0.49) and the ends of their touches with the ends of utterances (DHH: M = 0.23, SD = 0.75; NH: M = 0.06, SD = 0.36) within a window that was smaller than 0.5 s. Paired-samples t tests revealed no significant differences between mothers of children who were DHH and mothers of children with NH in how well they aligned the beginnings, t(7) = −0.15, p = .883, and endings, t(7) = 0.16, p = .875, of their touches with their utterances (see Table 3 for a summary of the main results).
Figure 4.
The proportion of touches that overlapped with utterances in the input to children who are deaf or hard of hearing (DHH; gray) and their peers with normal hearing (NH; white). Each error bar is constructed using 1 SEM.
Table 3.
Means, standard deviations, confidence intervals, and effect sizes for each of the variables.
| Measure | M (SD), DHH | 95% CI, DHH | M (SD), NH | 95% CI, NH | Effect size |
|---|---|---|---|---|---|
| Touch frequency | 42.66 (57.11) | [6.37, 78.95] | 13.66 (14.96) | [4.16, 23.17] | NA |
| Touch duration | 41.76 (58.06) | [4.88, 78.65] | 11.4 (16.72) | [0.77, 22.02] | NA |
| Touches aligned with utterances | 0.86 (0.15) | [0.74, 0.98] | 0.65 (0.26) | [0.45, 0.86] | r = .495 |
Note. DHH = deaf or hard of hearing; NH = normal hearing; CI = confidence interval; NA = not applicable.
Discussion
The current study examined the frequency of touch use and its combination with speech in the input to children who are DHH and age- and gender-matched peers with NH. Based on previous work, we predicted that, due to their children's sensory deficit, mothers of children who are DHH would utilize touch more frequently when interacting with their children than mothers of children with NH. Furthermore, we predicted that, given their children's reduced access to auditory input, mothers of children who are DHH would be more likely to align their touches with speech.
The results of this study first revealed that, contrary to our prediction, there is no statistically significant evidence for a group difference in the frequency and duration of touch between mothers of children who are DHH and mothers of age-matched peers with NH. Yet, we found a statistically significant difference in the variability of the frequency and total duration of touches. Specifically, compared to mothers of children with NH, mothers of children who are DHH demonstrated a larger variability in their use of touch cues. Second, we found evidence for a statistically significant difference in the proportion of touch events that were aligned with utterances: Mothers of children who are DHH were more likely to align their touches with utterances compared to mothers of peers with NH. However, when mothers aligned their touches with utterances, they produced events that were well aligned regardless of their children's hearing status. This suggests that the difference between the groups was only in the proportion of touches that were aligned with speech and not in the quality of such alignment.
Although the current study does not allow us to ascertain the source of variability in using touch, there are several possible explanations we can suggest. First, it is unlikely that the larger variability in the DHH group stems from the small sample size or the wide age range of children in the sample because the NH group was equal in size and included the same age range without displaying such large variability in the use of touch. However, it is possible that differences in age of receiving an assistive device, cognitive and language levels, and/or differences in intervention strategies (factors beyond the scope of the current study) affected (separately or together) the tactile behavior of the mothers in the DHH group. These factors often help explain variability in measures obtained from this type of pediatric population (e.g., Kirk, Miyamoto, Ying, Perdew, & Zuganelis, 2000). In addition, previous studies have demonstrated that mothers of children who are DHH spend a considerably greater proportion of their time utilizing multimodal forms of communication when interacting with their children (Depowski et al., 2015). Consequently, it is possible that the results of the current study, suggesting a greater variability in the amount of touch in the DHH group compared to the NH group, stem from mothers' greater emphasis on multimodal (visual, vocal, and tactile) strategies of communication and the use of other strategies and cues that we did not explore. 3 Future research with samples that are more homogenous in terms of the length of hearing experience, assistive devices, and language and cognitive levels may help shed light on this variability and better explain its source.
Our findings regarding the alignment of touch events with speech have several implications. Previous studies have suggested that mothers with NH may be less sensitive to the needs of their children who are DHH, especially in the use of touch as an attention-getting behavior (Koester & Lahti-Harper, 2010; Paradis & Koester, 2015; Waxman & Spencer, 1997). Yet, there seem to be no differences in the proportion of multimodal auditory–visual cues in the input to children who are DHH and those with NH (Lund & Schuele, 2015; but see Depowski et al., 2015, for evidence to the contrary). The current results suggest that, by producing a higher proportion of touches that are well aligned with utterances, mothers of children who are DHH may be adjusting their multimodal input in a way that may scaffold their children's access to speech input. Furthermore, these results suggest that there are differences in how caregivers utilize the various sense modalities when interacting with their children. By using a sensory modality that their children can easily access (touch) and aligning it with specific linguistic units (utterances), mothers of children who are DHH create a multimodal input that can potentially serve their children in several ways. Although we do not yet understand the rationale for mothers' production of such temporally packaged multimodal events, we speculate that this behavior may have a positive influence on the child who is DHH for several reasons. First, touch cues stimulate an intact sense (touch is the first sense to develop and is the least likely to be influenced by deficits to later developing sensory systems; Bremner et al., 2012) and are thus less likely to be ignored by the child who is DHH. Touch cues are typically inherently redundant because they can be processed through multiple sense modalities (they can be felt and seen); hence, children may allocate more attentional resources to processing touch cues. Second, when touch is combined with speech, adding another layer of multimodality to the event, the child receives stimulation to another sense modality. Such redundancy in the signal, exemplified in the simultaneous temporally synchronous stimulation of multiple sense modalities, has been suggested to be an efficient recruiter of children's attention (Bahrick & Lickliter, 2000).
These features of touch, alone and when combined with speech, provide a solid rationale for the idea that touch may serve as an attention-getter for children who are DHH. Indeed, previous studies have proposed that touch serves as a means of getting the visual attention of a child who is DHH before using a sign or showing an object (e.g., Paradis & Koester, 2015); however, such claims were made based on the visible behaviors of caregivers and their children's responses without considering the redundancy embedded in the touch event itself. Given that a higher proportion of touches produced by mothers of children who are DHH was aligned with speech, and given that children in our sample were mostly exposed to oral communication, we speculate that, if mothers use touch, they may be doing so in order to get their child's attention to the speech signal. The multimodal temporally packaged touch and speech events that they create are even more salient and redundant than touch-only events because they stimulate more sense modalities at the same time and are therefore harder to ignore. Future experimental work designed to specifically explore how children who are DHH respond to the alignment of touch with speech will help confirm or refute these assumptions. Furthermore, the fact that touch and speech events in our sample were found to be well aligned (they occurred within less than 0.5 s) regardless of the child's hearing status suggests that this multimodal input may provide all children with a tight package of discrete events (in this case, utterances). Highlighting linguistic units in this manner could facilitate their segmentation from the continuous stream of speech—a strategy that if used systematically may help the child to locate the edges of linguistic units. Such benefit may be greater for the child who is DHH whose access to the speech signal and the segmentation cues that are embedded in the speech signal (e.g., statistical patterns; Pelucchi et al., 2009) is impoverished.
Support for a facilitative effect of touch cues that are systematically aligned with linguistic units on the segmentation of those units comes from a recent study with typically developing infants with NH. Specifically, in exploring whether infants benefit from the alignment of touch with speech for segmenting units out of the speech stream, Seidl, Tincoff, Baker, and Cristia (2015) demonstrated that infants are sensitive to the systematicity in touch and speech combinations in a continuous stream of syllables with no reliable cues to word boundaries. Infants listened differently to events in which the same trisyllabic sequence (e.g., dobita) was paired with a consistent touch on a fixed location, as opposed to varying trisyllabic sequences (e.g., nepoku) occurring with a touch on another location. These results show that infants as young as 4 months old can rely on touch in combination with speech for the detection of the edges of units in a continuous stream of speech. More importantly, the results show that, when multimodal packaging of specific linguistic units is provided in a systematic manner, the infant can use the signal to his or her advantage for speech segmentation. Future work can explore whether this kind of redundancy may be particularly useful to children who are DHH.
In summary, the results presented here show that mothers of children who are DHH adjust their input in ways that may be beneficial for their children in a variety of ways. Although these results are interesting and provide a wealth of ideas for further research questions, it is necessary to acknowledge the limitations of the current study and the alternative interpretations of the results. First, the sampling context of the current study and the small sample size may challenge the external validity of the findings and make it difficult to generalize our interpretation of the data to the general population of mothers of children who are DHH. Furthermore, the small sample size makes it difficult to explore differences between the groups that may be the result of age or developmental and cognitive level rather than hearing status. Second, the wide variability in the hearing experiences of children in our sample and the variability in their degree of hearing loss make it difficult to explore the question of whether our measures of interest depend on the child's hearing loss and length of hearing experience. Yet, it is also necessary to acknowledge that controlling for all the factors that may be contributing to the variability between children who are DHH is not an easy task. Furthermore, our sample was not homogenous or big enough to explore whether features of speech and touch input to children who are DHH are related to children's assistive devices. Studies with larger and more homogenous samples can help address these questions in detail and provide a thorough investigation of the issues presented in this article. Finally, future studies can also explore whether the variable patterns we observe in the input to children who are DHH exert an impact on later language development. This investigation could yield information helpful to caregivers and clinicians.
Acknowledgments
This research project was supported by a Collaboration Translational Research grant, “Infant-Directed Speech and Language Development in Infants With Hearing Loss,” to Amanda Seidl and Derek Houston (Grant UL1TR001108). The authors would like to thank all the families who participated in this study, the audiologists at the different sites for their help with recruiting these families, and all the research assistants and staff members for their help with gathering and analyzing the data.
Appendix
Touch Types
| Brush with toy | A motion with the toy that begins in one location and ends in another; it is performed with the toy in hand and involves toy to body contact. |
| Brush | A motion that begins in one location and ends in another; it is performed either with one finger or the whole hand. |
| Grab | The mom wraps her hand around a body part. It is likely to occur right before another type of touch, such as moving. |
| Hold | The mom embraces the child to keep her in a specific position (e.g., on her lap). |
| Move | The mom moves a child's body part(s) in any way (shaking,…). |
| Other | Any touch that cannot be identified as any of the other types. If it can be classified as another type of touch, other should not be used. |
| Other with toy | Any touch with the toy that cannot be identified as any of the other types. This type of touch should only be utilized if the toy is in contact with the child's body and the caregiver's hand is not touching. |
| Pinch | Squeezing with two fingers only. The annotation starts with the fingers stretched before the pinch (but in contact with the body) and ends with the fingers stretched again as in the initial position (still in contact with the body). |
| Poke | A motion with just one finger. The poke occurs when the tip of the finger touches in a “poking motion” rather than the entire finger tapping on the body part. The annotation starts with the touch on the body part and ends when the finger is pulled back, either to start a new poke or to end the whole touch. |
| Rest | Mom is resting her hand on any of the child's body parts. The mom's hand needs to be relaxed on the body in a resting position. |
| Squeeze | A motion with the whole hand (like a pinch except with the whole hand). The annotation starts with the hand stretched before the squeeze (but in contact with the body) and ends with the hand stretched again as in the initial position (still in contact with the body). |
| Tap with toy | A motion with the toy that starts with the actual touch of the toy on the body part and ends when the toy is pulled back, either to start a new tap or to end the whole touch. |
| Tap | A motion that can occur with one finger or even the whole hand. The tapping motion will not have the fingers bent as it would in a poke, but rather, the entire finger or hand will act on the body part at the same time. Like poking, the annotation starts with the actual touch on the body part and ends when the hand is pulled back, either to start a new tap or to end the whole touch. |
| Tickle | A fast-paced, back-and-forth motion of the fingers on any of the child's body parts. |
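To make the coding scheme concrete, the sketch below (in Python) shows one way the touch types above could be represented and how a touch event's temporal overlap with a speech unit might be checked. This is an illustrative example only, with hypothetical interval times and names; it is not the annotation or analysis pipeline used in the study, which relied on ELAN and Praat annotations.

```python
# Illustrative sketch only (not the study's pipeline): touch-type labels and
# a minimal overlap check between a touch event and a speech unit.
from dataclasses import dataclass

# Labels corresponding to the touch types defined in the Appendix.
TOUCH_TYPES = {
    "brush with toy", "brush", "grab", "hold", "move", "other",
    "other with toy", "pinch", "poke", "rest", "squeeze",
    "tap with toy", "tap", "tickle",
}

@dataclass
class Interval:
    label: str     # touch type, or the text of a speech unit
    onset: float   # seconds from the start of the recording
    offset: float  # seconds from the start of the recording

def is_aligned(touch: Interval, speech: Interval) -> bool:
    """Treat a touch as aligned with speech if the two intervals overlap."""
    assert touch.label in TOUCH_TYPES, f"unknown touch type: {touch.label}"
    return touch.onset < speech.offset and speech.onset < touch.offset

# Hypothetical example: a tap overlapping a maternal utterance.
touch = Interval("tap", onset=12.3, offset=12.9)
speech = Interval("look at the ball", onset=12.0, offset=13.5)
print(is_aligned(touch, speech))  # True
```

The overlap criterion used here (any shared time between the two intervals) is only one possible operationalization of alignment; alternatives, such as requiring onsets within a fixed temporal window, would require changing only `is_aligned`.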
Funding Statement
This research project was supported by a Collaboration Translational Research grant, “Infant-Directed Speech and Language Development in Infants With Hearing Loss,” to Amanda Seidl and Derek Houston (Grant UL1TR001108).
Footnotes
Although the participants' age range (11.1–42.8 months) is somewhat wide and reflects a range of developmental and cognitive skills, similar studies have also tested children within the same age range (e.g., 10.3–37.1 months in Bergeson et al., 2006, and 16–43 months in Lund & Schuele, 2015).
Such variability in severity of hearing loss is not uncommon in the literature (Ambrose, VanDam, & Moeller, 2014; Ambrose et al., 2015; Koester et al., 1998; Meadow-Orlans, 1997; Meadow-Orlans & Spencer, 1996; VanDam et al., 2012; Waxman & Spencer, 1997).
Specifically, mothers of the 12 children with NH seem to use touch during the play interactions to a similar extent as one another, hence the smaller variability in the count and duration of touch. On the other hand, it is possible that mothers of children who are DHH show greater variability in their use of touch because they are utilizing other cues, such as facial expressions or gestures, when interacting with their children. Examining this possibility is beyond the scope of this article.
References
- Abu-Zhaya R., Seidl A., & Cristia A. (2017). Multimodal infant-directed communication: How caregivers combine tactile and linguistic cues. Journal of Child Language, 44, 1088–1116. [DOI] [PubMed] [Google Scholar]
- Ambrose S. E., VanDam M., & Moeller M. P. (2014). Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss. Ear and Hearing, 35(2), 139–147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ambrose S. E., Walker E. A., Unflat-Berry L. M., Oleson J. J., & Moeller M. P. (2015). Quantity and quality of caregivers' linguistic input to 18-month and 3-year-old children who are hard of hearing. Ear and Hearing, 36, 48S–59S. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bahrick L. E., & Lickliter R. (2000). Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Developmental Psychology, 36(2), 190–201. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bergeson T. R., Miller R. J., & McCune K. (2006). Mothers' speech to hearing-impaired infants and children with cochlear implants. Infancy, 10(3), 221–240. [Google Scholar]
- Boersma P., & Weenink D. (2005). Praat: Doing phonetics by computer. Retrieved from http://www.fon.hum.uva.nl/praat/
- Bornstein M. H., Hahn C.-S., & Haynes O. M. (2004). Specific and general language performance across early childhood: Stability and gender considerations. First Language, 24(3), 267–304. [Google Scholar]
- Bortfeld H., Morgan J. L., Golinkoff R. M., & Rathbun K. (2005). Mommy and me: Familiar names help launch babies into speech-stream segmentation. Psychological Science, 16(4), 298–304. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bremner A. J., Lewkowicz D. J., & Spence C. (2012). The multisensory approach to development. In Bremner A. J., Lewkowicz D. J., & Spence C. (Eds.), Multisensory development (pp. 1–26). Oxford, United Kingdom: Oxford University Press. [Google Scholar]
- Brugman H., & Russel A. (2004). Annotating multimedia/multi-modal resources with ELAN. Proceedings of LREC 2004, Fourth International Conference on Language Resources and Evaluation, Nijmegen, The Netherlands. https://tla.mpi.nl/tools/tla-tools/elan [Google Scholar]
- Chong S. C. F., Werker J. F., Russell J. A., & Carroll J. M. (2003). Three facial expressions mothers direct to their infants. Infant and Child Development, 12(3), 211–232. [Google Scholar]
- Depowski N., Abaya H., Oghalai J., & Bortfeld H. (2015). Modality use in joint attention between hearing parents and deaf children. Frontiers in Psychology, 6, 1556. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eisenberg L. S. (2007). Current state of knowledge: Speech recognition and production in children with hearing impairment. Ear and Hearing, 28(6), 766–772. [DOI] [PubMed] [Google Scholar]
- Fagan M. K., Bergeson T. R., & Morris K. J. (2014). Synchrony, complexity and directiveness in mothers' interactions with infants pre- and post-cochlear implantation. Infant Behavior and Development, 37, 249–257. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Feldman R., Eidelman A. I., Sirota L., & Weller A. (2002). Comparison of skin-to-skin (kangaroo) and traditional care: Parenting outcomes and preterm infant development. Pediatrics, 110(1), 16–26. [DOI] [PubMed] [Google Scholar]
- Feldman R., Rosenthal Z., & Eidelman A. I. (2014). Maternal-preterm skin-to-skin contact enhances child physiologic organization and cognitive control across the first 10 years of life. Biological Psychiatry, 75(1), 56–64. [DOI] [PubMed] [Google Scholar]
- Ferber S. G. (2004). The nature of touch in mothers experiencing maternity blues: The contribution of parity. Early Human Development, 79(1), 65–75. [DOI] [PubMed] [Google Scholar]
- Ferber S. G., Feldman R., & Makhoul I. R. (2008). The development of maternal touch across the first year of life. Early Human Development, 84(6), 363–370. [DOI] [PubMed] [Google Scholar]
- Field T. (2010). Touch for socioemotional and physical well-being: A review. Developmental Review, 30(4), 367–383. [Google Scholar]
- Franco F., Fogel A., Messinger D. S., & Frazier C. A. (1996). Cultural differences in physical contact between Hispanic and Anglo mother–infant dyads living in the United States. Early Development and Parenting, 5(3), 119–127. [Google Scholar]
- Gogate L. J., Bahrick L. E., & Watson J. D. (2000). A study of multimodal motherese: The role of temporal synchrony between verbal labels and gestures. Child Development, 71(4), 878–894. [DOI] [PubMed] [Google Scholar]
- Gogate L. J., Maganti M., & Bahrick L. E. (2015). Cross-cultural evidence for multimodal motherese: Asian Indian mothers' adaptive use of synchronous words and gestures. Journal of Experimental Child Psychology, 129, 110–126. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gogate L. J., Prince C. G., & Matatyaho D. J. (2009). Two-month-old infants’ sensitivity to changes in arbitrary syllable-object pairings: The role of temporal synchrony. Journal of Experimental Psychology: Human Perception and Performance, 35(2), 508–519. [DOI] [PubMed] [Google Scholar]
- Goldin-Meadow S., & Saltzman J. (2000). The cultural bounds of maternal accommodation: How Chinese and American mothers communicate with deaf and hearing children. Psychological Science, 11(4), 307–314. [DOI] [PubMed] [Google Scholar]
- Gottlieb G. (1971). Ontogenesis of sensory function in birds and mammals. In Tobach E., Aronson L. R., & Shaw E. (Eds.), The biopsychology of development (pp. 67–128). New York, NY: Academic Press. [Google Scholar]
- Harris M., & Chasin J. (2005). Visual attention in deaf and hearing infants: The role of auditory cues. The Journal of Child Psychology and Psychiatry, 46(10), 1116–1123. [DOI] [PubMed] [Google Scholar]
- Jean A. D. L., & Stack D. M. (2012). Full-term and very-low-birth-weight preterm infants' self-regulating behaviors during a still-face interaction: Influences of maternal touch. Infant Behavior and Development, 35(4), 779–791. [DOI] [PubMed] [Google Scholar]
- Jean A. D. L., Stack D. M., & Fogel A. (2009). A longitudinal investigation of maternal touching across the first 6 months of life: Age and context effects. Infant Behavior and Development, 32(3), 344–349. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Johnson E. K. (2012). Bootstrapping language: Are infant statisticians up to the job? In Rebuschat P. & Williams J. N. (Eds.), Statistical learning and language acquisition (pp. 55–89). Berlin, Germany: Walter de Gruyter. [Google Scholar]
- Kirk K. I., Miyamoto R. T., Ying E. A., Perdew A. E., & Zuganelis H. (2000). Cochlear implantation in young children: Effects of age at implantation and communication mode. The Volta Review, 102(4), 127–144. [Google Scholar]
- Koester L. S., Brooks L., & Karkowski A. (1998). A comparison of the vocal patterns of deaf and hearing mother-infant dyads during face-to-face interactions. Journal of Deaf Studies and Deaf Education, 3(4), 290–301. [DOI] [PubMed] [Google Scholar]
- Koester L. S., Brooks L., & Traci M. A. (2000). Tactile contact by deaf and hearing mothers during face-to-face interactions with their infants. Journal of Deaf Studies and Deaf Education, 5(2), 127–139. [DOI] [PubMed] [Google Scholar]
- Koester L. S., & Lahti-Harper E. (2010). Mother–infant hearing status and intuitive parenting behaviors during the first 18 months. American Annals of the Deaf, 155(1), 5–18. [DOI] [PubMed] [Google Scholar]
- Kondaurova M. V., & Bergeson T. R. (2011). The effects of age and infant hearing status on maternal use of prosodic cues for clause boundaries in speech. Journal of Speech, Language, and Hearing Research, 54(3), 740–754. [DOI] [PubMed] [Google Scholar]
- Kondaurova M. V., Bergeson T. R., & Dilley L. C. (2012). Effects of deafness on acoustic characteristics of American English tense/lax vowels in maternal speech to infants. The Journal of the Acoustical Society of America, 132(2), 1039–1049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kondaurova M. V., Bergeson T. R., & Xu H. (2013). Age-related changes in prosodic features of maternal speech to prelingually deaf infants with cochlear implants. Infancy, 18(5), 825–848. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lam C., & Kitamura C. (2010). Maternal interactions with a hearing and hearing-impaired twin: Similarities and differences in speech input, interaction quality, and word production. Journal of Speech, Language, and Hearing Research, 53, 543–555. [DOI] [PubMed] [Google Scholar]
- Lederberg A. R., & Everhart V. S. (1998). Communication between deaf children and their hearing mothers: The role of language, gesture, and vocalizations. Journal of Speech, Language, and Hearing Research, 41(4), 887–899. [DOI] [PubMed] [Google Scholar]
- Lewkowicz D. J. (1996). Perception of auditory–visual temporal synchrony in human infants. Journal of Experimental Psychology: Human Perception and Performance, 22(5), 1094–1106. [DOI] [PubMed] [Google Scholar]
- Lewkowicz D. J. (2010). Infant perception of audio-visual speech synchrony. Developmental Psychology, 46(1), 66–77. [DOI] [PubMed] [Google Scholar]
- Loots G., & Devisé I. (2003). The use of visual–tactile communication strategies by deaf and hearing fathers and mothers of deaf infants. Journal of Deaf Studies and Deaf Education, 8(1), 31–42. [DOI] [PubMed] [Google Scholar]
- Lund E., & Schuele C. M. (2015). Synchrony of maternal auditory and visual cues about unknown words to children with and without cochlear implants. Ear and Hearing, 36(2), 229–238. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mattys S. L., White L., & Melhorn J. F. (2005). Integration of multiple speech segmentation cues: A hierarchical framework. Journal of Experimental Psychology: General, 134(4), 477–500. [DOI] [PubMed] [Google Scholar]
- Meadow-Orlans K. P. (1997). Effects of mother and infant hearing status on interactions at twelve and eighteen months. Journal of Deaf Studies and Deaf Education, 2(1), 26–36. [DOI] [PubMed] [Google Scholar]
- Meadow-Orlans K. P., & Spencer P. E. (1996). Maternal sensitivity and the visual attentiveness of children who are deaf. Early Development and Parenting, 5(4), 213–223. [Google Scholar]
- Mitchell R. E., & Karchmer M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163. [Google Scholar]
- Moeller M. P., & Tomblin J. B. (2015). An introduction to the outcomes of children with hearing loss study. Ear and Hearing, 36(1), 4S–13S. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nomikou I., Koke M., & Rohlfing K. (2017). Verbs in mothers' input to six-month-olds: Synchrony between presentation, meaning, and actions is related to later verb acquisition. Brain Sciences, 7(5), 52. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nomikou I., & Rohlfing K. J. (2011). Language does something: Body action and language in maternal input to three-month-olds. IEEE Transactions on Autonomous Mental Development, 3(2), 113–128. [Google Scholar]
- O'Neill M., Bard K. A., Linnell M., & Fluck M. (2005). Maternal gestures with 20-month-old infants in two contexts. Developmental Science, 8(4), 352–359. [DOI] [PubMed] [Google Scholar]
- Paradis G., & Koester L. S. (2015). Emotional availability and touch in deaf and hearing dyads. American Annals of the Deaf, 160(3), 303–315. [DOI] [PubMed] [Google Scholar]
- Pelucchi B., Hay J. F., & Saffran J. R. (2009). Statistical learning in a natural language by 8-month-old infants. Child Development, 80(3), 674–685. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Prendergast S. G., & McCollum J. A. (1996). Let's talk: The effect of maternal hearing status on interactions with toddlers who are deaf. American Annals of the Deaf, 141(1), 11–18. [DOI] [PubMed] [Google Scholar]
- Seidl A., & Johnson E. K. (2006). Infant word segmentation revisited: Edge alignment facilitates target extraction. Developmental Science, 9(6), 565–573. [DOI] [PubMed] [Google Scholar]
- Seidl A., Tincoff R., Baker C., & Cristia A. (2015). Why the body comes first: Effects of experimenter touch on infants' word finding. Developmental Science, 18(1), 155–164. [DOI] [PubMed] [Google Scholar]
- Smith N. A., & McMurray B. (2018). Temporal responsiveness in mother–child dialogue: A longitudinal analysis of children with normal hearing and hearing loss. Infancy, 1–22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Spencer P. E., Bodner-Johnson B. A., & Gutfreund M. K. (1992). Interacting with infants with a hearing loss: What can we learn from mothers who are deaf? Journal of Early Intervention, 16(1), 64–78. [Google Scholar]
- Tamis-LeMonda C. S., Kuchirko Y., & Tafuro L. (2013). From action to interaction: Infant object exploration and mothers' contingent responsiveness. IEEE Transactions on Autonomous Mental Development, 5(3), 202–209. [Google Scholar]
- VanDam M., Ambrose S. E., & Moeller M. P. (2012). Quantity of parental language in the home environments of hard-of-hearing 2-year-olds. Journal of Deaf Studies and Deaf Education, 17(4), 402–420. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vanormelingen L., De Maeyer S., & Gillis S. (2016). A comparison of maternal and child language in normally-hearing and hearing-impaired children with cochlear implants. Language, Interaction and Acquisition, 7(2), 145–179. [Google Scholar]
- Waxman R. P., & Spencer P. E. (1997). What mothers do to support infant visual attention: Sensitivities to age and hearing status. Journal of Deaf Studies and Deaf Education, 2(2), 104–114. [DOI] [PubMed] [Google Scholar]