Abstract
In the current study, we examine how hearing parents use multimodal cuing to establish joint attention with their hearing (N=9) or deaf (N=9) children during a free-play session. The deaf children were all candidates for cochlear implantation who had not yet been implanted, and each hearing child was age-matched to a deaf child. We coded parents’ use of auditory, visual, and tactile cues, alone and in different combinations, during both successful and failed bids for children’s attention. Although our findings revealed no clear quantitative differences in parents’ use of multimodal cues as a function of child hearing status, secondary analyses revealed that hearing parents of deaf children used shorter utterances while initiating joint attention than did hearing parents of hearing children. Hearing parents of deaf children also touched their children twice as often throughout the play session as did hearing parents of hearing children. These findings demonstrate that parents differentially accommodate the specific needs of their hearing and deaf children in subtle ways to establish communicative intent.
Introduction
Parent-child dyads in which the parent is hearing and the child is deaf present a unique opportunity to examine whether and how people spontaneously adjust interactions depending on the unique needs of their communicative partners. In prior research, we examined how parents of deaf children adjusted their dyadic interactions during episodes of joint attention. Findings from that work showed that hearing parents accommodate their deaf children’s hearing status by using multimodal cues throughout episodes of joint attention more often than hearing parents of hearing children (Depowski et al., 2015). In spite of this, findings from the same study showed that hearing parents of hearing children spent more time overall in joint attention with their children than hearing parents of deaf children. In other words, we found that hearing parents make communicative accommodations for their deaf children when they are engaged in joint attention with them, but that they are engaged in joint attention with their deaf children less often than hearing parents of hearing children. This raises the question: how do hearing parents of deaf children initiate joint attention in the first place? The goal of the present study is to characterize how hearing parents establish joint attention with their deaf children and to compare it to how joint attention is established by hearing parents of hearing children.
Joint Attention in Hearing-Status Matched Dyads
Joint attention refers to the shared focus of two people on an object. It is achieved when one person alerts the other to the object of interest. For dyads consisting of a young child and a parent, periods of joint attention have been shown to facilitate the development of a range of skills, including language. The relationship between joint attention and language development has motivated extensive research on how hearing parents support the development of joint attention skills in their hearing children (i.e., hearing parent-hearing child dyads). To the degree that researchers have examined the development of deaf children’s joint attention abilities (Lieberman et al., 2014), they too have focused on parent-child dyads that are matched for hearing status (i.e., deaf parent-deaf child dyads). Shared hearing status in parent-child dyads influences language learning, as evidenced by findings that children in deaf parent-deaf child dyads develop joint attention at rates similar to their typically-hearing peers (Spencer, 2000). In contrast, children in hearing parent-deaf child dyads do not (Nowakowski et al., 2009).
Joint Attention in Hearing-Status Mismatched Dyads
Hearing-status mismatched dyads consist of conversational partners who do not hear equally well. In terms of prevalence of hearing-status mismatches in parent-child dyads, 4.4% of children born to deaf parents are also deaf, meaning that over 95% of children born to deaf parents are themselves able to hear (Mitchell & Karchmer, 2004). Likewise, well over 95% of deaf children are born to hearing parents, who themselves have little to no experience of deafness (Mehra et al., 2009). In other words, to the degree there is a deaf member of a biological parent-child dyad, that dyad is more likely to be mismatched than matched on hearing status. Here we focus on hearing parent-deaf child dyads in which parents have decided to pursue cochlear implantation for their children with the goal of raising their children using spoken language.
Cochlear implants are an assistive technology that allows a person with sensorineural hearing loss to experience the sensation of sound via direct stimulation of the auditory nerve (Korver et al., 2017). Early implantation maximizes a deaf child’s access to speech during a period of heightened neural plasticity, which in turn should lead to more age-appropriate speech-language skills given sufficient spoken language input (Fitzpatrick et al., 2015). Thus, the earlier the child receives an implant, the better. Although the average age of pediatric cochlear implantation is steadily declining (see Colletti, 2013), with some children receiving implants as young as 6 months of age (Miyamoto et al., 2017), the average age of implantation remains considerably higher (Hoff et al., 2019).
In the sample of children included in the present study—all of whom were candidates for cochlear implantation and none of whom had yet received their implant—ages ranged from 11 to 39 months. Indeed, the average age of cochlear implantation varies substantially depending on a number of factors (see Fitzpatrick et al., 2015). For example, because cochlear implantation is an invasive medical procedure, parents must have a child’s hearing assessed by clinicians and the child must be approved for implantation by a medical team. Only at that point can surgery be scheduled and the child implanted. Following implantation, the child must recover, at which point the child’s implant can be activated and programmed (Chen & Oghalai, 2016). Finally, it takes time post-implantation for the child to recognize structure in the auditory signal that the device provides and to learn from it (Bortfeld, 2019). There are many additional sources of delay that are beyond the scope of this overview to describe in detail. Suffice it to say that such delays may put pediatric cochlear implantees at a developmental disadvantage relative to children from hearing-status matched dyads, particularly insofar as timely exposure to fluent and structured linguistic input is concerned (see Hall et al., 2017, 2018a, 2018b).
In hearing-status mismatched parent-child dyads where little to no sign language is used, as is often the case when the child is a candidate for cochlear implantation, the establishment of joint attention can serve as an important scaffold for children to learn about communicative intent, just as it does for children in hearing-status matched dyads. Thus far, researchers have focused on hearing-mismatched parent-child dyads to identify strategies used by parents to engage children’s attention (Gale & Schick, 2009; Lieberman et al., 2014), characterize parents’ adaptive social behaviors (Nowakowski et al., 2009), and compare overall amounts of joint attention across dyad types (i.e., hearing parent-deaf child; hearing parent-hearing child; deaf parent-deaf child) (Spencer et al., 1992; Spencer et al., 2004; Nowakowski et al., 2009). Although these studies generally include small sample sizes and children with highly heterogeneous hearing issues, their findings show that, while hearing parents are sensitive to deaf children’s communicative efforts, the overall rate of maternally-initiated joint attention is lower in hearing-status mismatched dyads. Moreover—and critical to our purposes here—deaf children in those studies were not candidates for cochlear implantation, nor did the researchers focus specifically on the manner in which parents engaged the children in joint attention. Therefore, in the current study we characterize the parental behaviors that lead to joint attention in hearing parent-deaf child dyads in which the child is a candidate for cochlear implantation, comparing those to behaviors observed in hearing parent-hearing child dyads in which each child is age-matched to a deaf child. In particular, we focus on the individual sensory cues or combinations of cues that parents from both dyad types use in successful and failed attempts to establish joint attention with their children.
Multimodal Cueing in Hearing Parent-Deaf Child Dyads
Multimodal cueing serves an important role in parent-child communication. For example, Bahrick and colleagues helped characterize how infants’ attentional biases contribute to their language development by demonstrating that infants respond to cues across the full range of sensory modalities when those cues are synchronous with one another (Bahrick & Lickliter, 2000; Bahrick, 2006). In particular, infants experience this intersensory redundancy in the form of multimodal cues when they visually observe and hear their parents speaking to them. Such cues have been shown to aid infants in learning abstract rules about language (Frank et al., 2009), and parental talk and touch within episodes of mutual engagement support the development of children’s sustained attention (Suarez-Rivera et al., 2019). Critically, whether parents provide such combinations of cues when they initiate joint attention in the first place is largely unknown. In previous research, we documented hearing parents’ use of converging, multimodal cues when interacting with their deaf children who were candidates for cochlear implantation (Depowski et al., 2015). Specifically, we observed that hearing parents made modifications to the input they provided to deaf children during episodes of joint attention, using a greater range of multimodal cues than hearing parents did with hearing children, albeit with considerable variability across different hearing parent-deaf child dyads (Depowski et al., 2015). In other words, hearing parents of deaf children accommodated their children’s unique communicative needs once the dyad was engaged in joint attention towards an object. But how these parents initiated joint engagement in the first place remains unclear. If hearing parents are making these important modifications within instances of joint attention with their deaf children, how does the dyad become jointly engaged?
Study 1A
The goal of the current study was to build on our team’s previous findings (Depowski et al., 2015; Gabouer et al., 2018; Bortfeld & Oghalai, 2018) by examining what cues and combinations of cues hearing parents use to initiate joint attention with their deaf children and to compare that to the cues used by hearing parents of hearing children. Parental attempts to establish joint attention first were classified as either successful or failed, and then were coded for the modality or combination of modalities used. We predicted that hearing parents of deaf children would engage in fewer instances of joint attention relative to hearing parents of hearing children. We also predicted that parents would combine more cues when attempting to engage in joint attention with deaf children than with hearing children.
Materials and Methods
Participants
Participants were nine severely to profoundly deaf children (females = 3) with a mean age of 22.2 months (SD = 9.4) and their hearing parents (females = 9), and nine typically developing children (females = 5) with a mean age of 24.2 months (SD = 11.3) and their hearing parents (females = 5). Each hearing child was matched as closely as possible based on age to a deaf child. Each family was recruited using the National Institutes of Health website or via local recruitment at the respective research sites (i.e., at Stanford University or at the University of Connecticut). All parents had at least some college education. Race and ethnicity information is presented in Table 1. All deaf children included in this study were candidates for a cochlear implant, but none had yet been implanted. All children were receiving at least one and no more than three hours of speech therapy each week, with only a subset reporting some exposure to signed communication in these sessions and/or at home (either a natural sign language, e.g., American Sign Language (ASL), or total or simultaneous communication, in which sign is used in conjunction with speech). No deaf children in this sample were receiving consistent, fluent language input in the visual modality. We observed little to no sign use in the free-play sessions, with only two of the nine parents of deaf children producing one or two simple (i.e., single-word) signs in the course of the interaction. These signs were included in our coding. The study was carried out in accordance with recommendations from the Stanford University School of Medicine Institutional Review Board and the University of Connecticut Institutional Review Board, with written informed consent from all participants in accordance with the Declaration of Helsinki. Parents provided written informed consent on behalf of their young children.
Table 1.
Demographic information of current sample.
| Race/Ethnicity | Hearing-Deaf Dyads | Hearing-Hearing Dyads | % Total |
|---|---|---|---|
| White | 2 | 8 | 56% |
| White-Hispanic | 6 | 1 | 39% |
| Asian | 1 | 0 | 5.6% |
| Totals | 9 | 9 | 100% |
Materials
Each parent-child dyad was invited to participate in a free-play session during a visit with their speech-language pathologist at the Stanford University Hearing Clinic or during a visit to the Husky Pub Language Lab at the University of Connecticut. Matching sets of toys appropriate for the age range included here were made available during all free-play sessions (a ball, a set of large blocks, a set of stacking cups, tableware, a tower of stacking rings, and toy cars). Parents were instructed to play with their child as they would at home, and the experimenter told parents she would return to the room after five minutes had elapsed. To ensure equal play session lengths, any extra time beyond the five minutes that followed the experimenter closing the playroom door—never more than 30 seconds of additional play time—was excised from each video. Videos of the hearing parent-deaf child dyads were then transmitted by collaborators at Stanford University to researchers at the University of Connecticut using Research Electronic Data Capture (REDCap) tools hosted at Stanford University (Harris et al., 2009). REDCap is a secure, web-based application designed to support data capture for research studies. It provided the two labs with validated data entry, audit trails for tracking data entry and export, and procedures for importing data from external sources. For the current study, REDCap was used solely as a means of secure video transfer between collaborators and was not used for any analytical/coding purposes.
Procedure
The videos were coded for initial instances of parent-initiated joint attention using ELAN (Wittenburg et al., 2006), a language annotation tool developed by the Max Planck Institute for Psycholinguistics (The Language Archive, Nijmegen, The Netherlands). ELAN (http://tla.mpi.nl/tools/tla-tools/elan/) allows for multimodal analyses of language and other behaviors and is available free of charge. We used modified coding criteria for joint attention based on the work of Tomasello and Farrar (1986), described below. Coded variables were analyzed using ELAN, Microsoft Excel, and a statistical software package.
Video Processing
Videos were reviewed for visual clarity, and Adobe Premiere Pro (CS6) was used to edit videos for the start and end time of each play session. The start time of the play session was defined as the first frame in which the testing room door was closed, leaving the parent and child alone. The end of each play session was defined as the first frame in which the experimenter opened the door to end the play session. The difference between these two time points gave the length of each play session and was used to confirm that each session was 5 minutes long.
Joint Attention Coding
In the present study, parent-initiated bids for joint attention—both successful and failed—were coded and quantified. Here, we use the term “bid” to describe a purposeful action on the part of the parent with the intent of directing his or her child’s attention to an object of interest. A successful bid for joint attention required three criteria: a parent’s bid for the child’s attention, gaze switching by the parent between the object and the child, and a response from the child that lasted for at least 3 seconds and demonstrated the child was aware of the interaction (see Figure 1). A successful bid also could occur if the parent shifted the child’s attention from one object to another using one of the same techniques. The child could respond to a bid by pointing, following the parent’s gaze, tapping or touching the parent, touching the object of interest, deliberately waving within the parent’s visual field, changing affect, and/or producing language. The child was required to engage with the parent, as indicated by one or more of these responses, for three seconds or more (Bakeman & Adamson, 1984) for the bid to be considered successful. If the parent attempted to initiate interaction with the child and the child did not respond within three seconds of the parent’s bid, the instance was coded as a failed bid (Figure 1). Additionally, if the parent did not engage in gaze-switching behavior (i.e., looking from the object back to the child), the instance was not considered a bid and was not coded (Figure 1).
Figure 1.
Decision tree used for joint attention coding.
To identify bids for joint attention in ELAN, the onset of a parent’s behavior (e.g., reaching, showing) was marked as the onset of the bid. The end of this period was marked when the parent completed one gaze shift between the infant and the object. If the child made no gaze shift, the end of the period was marked at the end of the three-second window of opportunity for the child to respond (Figure 2, Seconds 0–3), and the instance was coded as a failed bid. Our criteria for what constituted initiation of a bid for joint attention were conservative: the parent had to demonstrate a look to the toy and a look back to the child to ensure the child was engaged. Additionally, a five-second rule of engagement was used: if the child disengaged and re-engaged within this five-second window (Figure 2, Step 3a), the episode continued and no new initiation could be coded. Similarly, there was a five-second rule of disengagement (Figure 2, Step 3b): a joint attention episode was terminated if the child no longer engaged with the object/parent for five seconds.
Figure 2.
Timeline of joint attention in seconds. Seconds 0–3 indicate the time after the parent made an initial bid for joint attention (1). After the onset of a bid, children had three seconds to respond to the bid for it to be classified as a successful bid. If the child did not respond, it was classified as a failed bid. During this 3-second window, any modality cues used by the parents were coded, up until the point at which the child responded or, if the child did not respond, until the end of the 3-second window (2).
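For readers who find it helpful, the following minimal Python sketch illustrates the bid-classification logic summarized in Figures 1 and 2. It assumes timestamped annotations have been exported from ELAN; all names and values are hypothetical and are not part of our actual coding workflow, which was carried out by human coders.

```python
from dataclasses import dataclass
from typing import Optional

RESPONSE_WINDOW = 3.0  # seconds the child has to respond after bid onset

@dataclass
class Bid:
    onset: float                     # time the parent's behavior (e.g., reaching) begins
    parent_gaze_shift: bool          # parent looked from the object back to the child
    child_response: Optional[float]  # time of the child's first response, if any

def classify_bid(bid: Bid) -> str:
    """Classify a parental bid following the rules summarized in Figures 1 and 2."""
    if not bid.parent_gaze_shift:
        return "not coded"           # no gaze switch: the instance is not a bid
    if bid.child_response is not None and bid.child_response - bid.onset <= RESPONSE_WINDOW:
        # Note: a successful bid additionally requires >= 3 s of sustained
        # engagement by the child, which the coder verifies separately.
        return "successful"
    return "failed"                  # no response within the 3-second window

print(classify_bid(Bid(onset=12.0, parent_gaze_shift=True, child_response=13.5)))  # successful
```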
Modality Coding
All parent-initiated joint attention bids were then coded separately based on parental use of the following modalities: auditory, visual, tactile, auditory-visual, auditory-tactile, visual-tactile, and auditory-visual-tactile. Only instances in which attempts were made to initiate joint attention were included in the coding presented here (Figure 2, Seconds 0–3, indicated by the red circle). Actions that occurred within episodes of joint attention (Figure 2, Seconds 3–7) were not coded, as here we were interested only in the precursors to joint attention. Only those actions made by a parent within the time window between bid initiation and child response (Figure 2, circled region) were considered in our coding of bid modalities.
Auditory.
The auditory modality included using sound to gain the child’s attention. This included language, humming, other vocal sounds (e.g., ‘psst!’), making noise with a toy, and clapping outside of the child’s visual field.
Visual.
The visual modality involved the parent moving a hand or an object into the child’s visual field to get the child’s attention. This included behaviors such as waving, gesturing, reaching, pointing, offering a toy, holding an object in the child’s visual field, or changing affect.
Tactile.
The tactile modality involved interactions initiated via touch, direct or indirect. This included tapping or touching the child, tickling, hugging, touching with a toy, or physically moving the child to direct their attention.
Auditory-visual.
The auditory-visual modality was a multimodal cue that included the parent using both an auditory and a visual behavior to gain the child’s attention. This included, but was not limited to, gesturing while talking, presenting a toy while describing it, responding to a visual event, or changing affect while producing a sound.
Auditory-tactile.
The auditory-tactile modality was a multimodal cue that involved the parent combining an auditory and a tactile cue. This included touching the child with a toy while describing it or making its accompanying noise (e.g., touching the child with a toy while naming one of its features).
Visual-tactile.
The visual-tactile modality was a multimodal cue that involved the parent using both a visual and tactile cue. This included directing the child’s attention to a toy not currently within the visual field by physically moving the child (e.g., while the child was sitting in the parent’s lap, the parent turns the child to guide him or her to look at new toys).
Auditory-visual-tactile.
The auditory-visual-tactile modality was a multimodal cue that included the use of sounds, visual information, and touch in an effort to gain the child’s attention (e.g., the parent showed the child the toy, while labeling the toy, and tickling the child).
Data Analysis
Inter-rater Reliability
Approximately 25% of the sample was dual coded for reliability. Reliability was calculated by examining the percent overlap in the quantity of successful and failed bids, as well as by comparing the modality code attributed to each instance of joint attention. Cohen’s κ (McHugh, 2012) was calculated to determine the extent of agreement between the annotations of the first and second coder regarding the identification of successful and failed bids. Results showed substantial agreement between coders in identifying successful bids, κ = .63, 95% CI [0.557, 0.703], and near-perfect agreement in identifying failed bids, κ = .85, 95% CI [0.777, 0.923]. The average overlap/extent ratio between coders’ modality identifications was 76% for successful bids and 80% for failed bids. All disagreements were resolved through discussion between the two coders and the first author.
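As an illustration of this agreement calculation, a minimal Python sketch is shown below. The judgments are hypothetical toy data, and scikit-learn’s cohen_kappa_score is used here as a stand-in for the procedure described in McHugh (2012).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical bid-level judgments from two coders (not our actual data)
coder1 = ["success", "fail", "success", "success", "fail", "success", "fail", "success"]
coder2 = ["success", "fail", "success", "fail",    "fail", "success", "fail", "success"]
kappa = cohen_kappa_score(coder1, coder2)

# Simple percent agreement, analogous to the overlap measure used for modality codes
mod1 = ["auditory-visual", "visual", "auditory", "auditory-visual"]
mod2 = ["auditory-visual", "visual", "auditory-visual", "auditory-visual"]
percent_agreement = 100 * sum(a == b for a, b in zip(mod1, mod2)) / len(mod1)

print(f"kappa = {kappa:.2f}, modality agreement = {percent_agreement:.0f}%")
```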
Joint Attention Modalities
Seven modality metrics were computed for both successful and failed bids to initiate joint attention. This was done by extracting the total number of occurrences of each modality cue used during an attempt to gain a child’s attention, whether it was a successful or a failed bid. Proportional data were then calculated by comparing the raw number of modality-specific bid types to the overall number of bids throughout the interaction (e.g., number of auditory-visual bids relative to the total number of parental bids). These data were compared as a function of child hearing status. Mann-Whitney U tests were used to compare the proportions and raw totals of modality use across hearing parent-hearing child and hearing parent-deaf child dyads. In contrast to a t-test, this non-parametric test is appropriate when the sample is small and the distribution of the data is unknown or not normal, making it more robust against outliers and heavy-tailed (i.e., non-normal) distributions, as was the case for these data.
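For illustration, this comparison can be run as in the following sketch, which assumes one proportion value per dyad; the numbers shown are hypothetical and are not drawn from our data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-dyad proportions of auditory-visual bids (9 dyads per group)
hearing_hearing = [0.60, 0.55, 0.70, 0.50, 0.65, 0.45, 0.80, 0.58, 0.62]
hearing_deaf    = [0.55, 0.40, 0.75, 0.60, 0.50, 0.66, 0.70, 0.48, 0.52]

u_stat, p_value = mannwhitneyu(hearing_hearing, hearing_deaf, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```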
Results
There were no instances of tactile-only modality use in either the successful or failed bids for joint attention, and there were no instances of auditory-tactile or visual-tactile combinations in failed bids. Therefore, these modalities were excluded from further analysis. Table 2 shows the overall number of occurrences of each modality by bid type and dyad hearing status.
Table 2.
Raw Frequency of Occurrence for Each Modality by Joint Attention Bid Type and Dyad Hearing Status
| Modality | Successful (HH) | Successful (HD) | Failed (HH) | Failed (HD) |
|---|---|---|---|---|
| Auditory | 9 | 3 | 2 | 3 |
| Visual | 7 | 8 | 4 | 2 |
| Tactile | 0 | 0 | 0 | 0 |
| Auditory-Visual | 28 | 23 | 10 | 12 |
| Auditory-Tactile | 1 | 0 | 0 | 0 |
| Visual-Tactile | 0 | 1 | 0 | 0 |
| Auditory-Visual-Tactile | 2 | 4 | 0 | 4 |

Note. HH = hearing parent-hearing child dyads; HD = hearing parent-deaf child dyads.
Proportion of Bids
Proportions were calculated by comparing the raw number of successful and failed bids to the overall number of bids throughout the interaction. We first used a proportion analysis to determine whether there were differences in the overall proportions of bids by dyad type (i.e., hearing parents of hearing children may simply bid for attention more often than hearing parents of deaf children, and therefore would have more chances for success). For this initial analysis, we compared the proportions of successful and failed bids by hearing parents of hearing children and hearing parents of deaf children in engaging their children in joint attention. There were no significant differences by dyad type (hearing parent-hearing child vs. hearing parent-deaf child) in the proportion of successful bids for joint attention (U = 51.5, p = 0.35) or of failed bids (U = 29.5, p = 0.35). Contrary to our predictions, neither group of parents was better or worse at initiating joint attention overall. Because the raw numbers of bids classified as successful and failed were similar across the two dyad types, for the remaining analyses we use frequency of occurrence as the basis for our comparisons.
Successful Parent-Initiated Joint Attention
A Mann-Whitney test indicated that there was no significant difference in the number of successful parental bids for joint attention using the unimodal auditory (U = 49.5, p = 0.45) or the unimodal visual (U = 39, p = 0.93) cues by dyad type. Additionally, there were no significant differences by dyad type in the number of successful bids for joint attention initiated via multimodal cues: auditory-visual (U = 48.5, p = 0.51), auditory-tactile (U = 45, p = 0.73), visual-tactile (U = 45, p = 0.73), or auditory-visual-tactile (U = 35, p = 0.66).
Failed Parent-Initiated Joint Attention
A Mann-Whitney test indicated that there was no significant difference in the number of failed attempts to initiate joint attention using unimodal auditory (U = 31.5, p = 0.45) or visual (U = 41.5, p = 0.97) cues by dyad type. Additionally, there was no significant difference in number of failed bids for joint attention using either auditory-visual cues (U = 37.5, p = 0.83) or auditory-visual-tactile cues (U = 27, p = 0.25).
Discussion
The purpose of the present study was to build on our previous research (Depowski et al., 2015) and characterize patterns of modality use in parent-initiated bids for joint attention in hearing parent-deaf child dyads and compare them to patterns in hearing parent-hearing child dyads. Our goal was to detail whether and how hearing parents of both deaf and hearing children use unimodal and multimodal cues in their attempts to direct their children’s attention. To that end, we developed a microcoding technique that focused on three critical features of joint attention, while also characterizing the patterns of multimodal cues that parents used to direct their children’s attention to an object of mutual interest. We predicted that hearing parent-deaf child dyads would engage in fewer instances of joint attention relative to those in hearing parent-hearing child dyads. Moreover, we expected to observe hearing parents use a range of modalities when attempting to engage in joint attention with their deaf children, and to do so more than hearing parents of hearing children. Neither of these predictions was supported by our findings. We next address the basis for these predictions and potential sources of variability in the data relative to previous work.
Joint Attention Coding
Joint attention is often quantified using structured assessment procedures that incorporate specific activities to elicit targeted behavior. For example, two structured measures frequently used in clinical domains are the Early Social Communication Scales (ESCS: Mundy et al., 2003; Seibert et al., 1982) and the Communication and Symbolic Behavior Scales (CSBS–DP: Wetherby & Prizant, 2002). The ESCS was designed to measure joint attention and related behaviors in typically developing toddlers (Morales et al., 2000; Mundy & Gomes, 1998), and the CSBS was developed to evaluate verbal and non-verbal communication in children at risk for communication and language impairments. These standardized measures emphasize gaze, point following, and point production. However, as should be apparent from the data reported here, a variety of other forms of communication can be documented during interactions with children from both typical and atypical populations. This is particularly relevant to hearing parent-deaf child dyads in which the child is a candidate for cochlear implantation and the dyad does not use sign language. Detailed examination of the communicative attempts that take place in more naturalistic interactions, as reported here, should reveal more nuanced information about what works to support communication—and what does not—than the more structured scenarios used in clinical research (see Roos et al., 2008 for such an approach in children with Autism Spectrum Disorder).
While prior research is consistent in showing that hearing parents of deaf children accommodate their deaf children during interactions (Depowski et al., 2015; Lieberman, Hatrak, & Mayberry, 2014), findings from the present study indicate that hearing parents use the same strategies to initiate joint attention with their hearing children. In the present study, we focused on the moments that led up to successful joint attention during free play, as well as on what happened prior to failed attempts by parents to establish joint attention with their children. Surprisingly, we observed no difference in outcome (successful or failed) of bids for attention by dyad type. This is in contrast to research focusing on parent and child behaviors within episodes of joint attention. Why is this? The coding scheme employed in the present study is one that we have carefully constructed based on extensive prior research by others (Tomasello & Farrar, 1986; Nowakowski et al., 2009), and arguably captures what researchers intended when joint attention was initially documented and classified (e.g., Bakeman & Adamson, 1984). An issue for consideration in future research is whether episodes of joint attention are being coded consistently across different labs. Given other notable differences between the present study and earlier studies that included deaf children, in particular that the deaf children included here were all candidates for cochlear implantation while the previous studies included parent-child dyads who used a range of communication techniques (i.e., formal sign language; auditory-verbal; oral only; auditory-verbal plus oral; total communication), it is clear that more research is needed to answer this and other questions. For now, we further interrogate the nature of the interactions we observed in the present study.
Study 1B
Our failure to find a difference in parental multimodal cue use during instigation of joint attention in Study 1A could be due to a number of factors, including our small sample size, the small sample size of the previous studies serving as our basis for comparison, the strict coding criteria we employed to identify episodes of joint attention, and the unique aspects of our particular sample of deaf children (i.e., deaf children who were candidates for cochlear implantation and who were not regularly exposed to sign language). To better characterize the interactions we observed in the present study across hearing parent-deaf child and hearing parent-hearing child dyads, we next conducted analyses with a focus on two additional aspects of parental input: 1) parental speech production during initiation of joint attention, and 2) parental use of touch more generally (i.e., throughout the free-play session). We next detail our rationale for examining these aspects of parental input.
Hearing parents typically rely on spoken language (produced in the auditory modality) to communicate with their hearing children. Recent research has shown that the hearing parents of deaf children who have received cochlear implants provide comparable amounts of spoken language input to their child as do hearing parents of hearing children (Vanormelingen et al., 2016). Of course, the point of an implant is to help deaf children hear and thus learn spoken language, so this is not entirely surprising. However, it does suggest that parents who already communicate primarily in the auditory modality prior to having a deaf child may not change the nature of their input to their children if they decide to pursue cochlear implantation once they do have a deaf child. Remarkably, there are no evidence-based guidelines available to inform hearing parents how to interact with their deaf children. We will address this further in the general discussion.
Another way that hearing parents can engage their profoundly deaf children is through touch. Whether hearing parents use spoken language with their children or not, hearing parents of profoundly deaf children must rely on some non-auditory sensory cue or cues to capture their children’s attention and engage them socially, and touch is one cue that parents of deaf children can rely on. Indeed, it is a sensory modality available to children even prenatally (Marx & Nagy, 2017). Tactile cues are an effective means of establishing social contact when audition is not available. Previous research has found that in hearing-status mismatched parent-child dyads, hearing mothers of deaf children use both tactile and visual information to communicate with their deaf children, and they do so more than mothers in hearing parent-hearing child dyads (Waxman & Spencer, 1997). Thus, touch is a way for parents to engage with their children when spoken language is not an option. Whether or not touch is a factor that helps distinguish interactions of the two dyad types included here likewise merits investigation.
Materials and Methods
The videos coded for joint attention in Study 1A were re-coded for our secondary analyses. Again, we used ELAN to annotate 1) the parental utterances during the bids for joint attention and 2) the overall instances of touch by parents. The coding criteria and inter-rater reliability for each of the newly coded variables of interest are presented below.
Mean Length of Utterances in Bids for Joint Attention
We examined the use of auditory language cues in parents’ bids to initiate joint attention by calculating the mean length of utterance (MLU) based on Brown’s (1973) protocol for determining the number of morphemes in an utterance. Utterances were identified in ELAN by transcribing any spoken language from the parent during an attempt to initiate joint attention. Then, utterances were quantified and coded for the number of morphemes contained in each. Here, an utterance is defined as a natural phrase-level unit of speech, typically bounded by silence. Each coded utterance could be a single word (e.g., “Look!”), a group of words, or a complete sentence. In our sample, each bid containing parent language was counted as one utterance, based on these coding criteria.
The overall parental MLU was calculated across all parental bids for joint attention, including both successful and failed bids, and then coded separately for whether the utterance was part of a successful or failed bid. Instances in which a parent used an auditory cue that resulted from an object noise (e.g., shaking a toy, tapping the floor) were excluded from these analyses. We hypothesized that parental MLU during bids to initiate joint attention would be significantly lower for hearing parents of deaf children than for hearing parents of hearing children.
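As a simple illustration of the MLU computation, consider the sketch below. The utterances are hypothetical, and word counts stand in for Brown’s (1973) morpheme rules, which were applied by hand during coding.

```python
def mean_length_of_utterance(utterances):
    """Average utterance length across a parent's coded bids."""
    counts = [len(u.split()) for u in utterances]  # word counts as a stand-in for morphemes
    return sum(counts) / len(counts)

# Hypothetical transcribed utterances from one parent's successful bids
successful_bid_utterances = ["look at the ball", "wow", "you want the blocks"]
print(f"MLU (successful bids) = {mean_length_of_utterance(successful_bid_utterances):.2f}")
```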
Parental Use of Touch
In our secondary analysis comparing parents’ use of touch overall across dyad types, we coded episodes of parental touch across the entirety of each play session (i.e., not only immediately prior to or within episodes of joint attention). This included identifying instances in which any type of touch was used by the parent, either touching the child directly (e.g., with a hand) or indirectly (e.g., with a toy), and regardless of the attentional states of either the parent or child.
In ELAN, touch was coded for duration, as well as for the raw number of instances. Coders identified any and all instances of intentional touch throughout the play session – from adjusting the child’s position or location in the room to tapping them atop the head with a soft toy. Instances of incidental contact, such as grazing, were not coded; rather, we were interested in how parents used purposeful touch to interact with their child. The annotation started at the initial point of contact and persisted as long as the parent maintained contact. If a parent removed their hand or a toy from the child for less than 1 second (e.g., while repeatedly tapping the child), this was coded as one continuous touch. The annotation ended when the parent was no longer in contact with the child for longer than 1 second. Here, we hypothesized that hearing parents of deaf children would use touch more often in their interactions.
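The 1-second merging rule can be illustrated with the following minimal sketch, which assumes raw touch annotations are available as (start, end) time pairs in seconds; the values and function name are hypothetical.

```python
def merge_touches(intervals, max_gap=1.0):
    """Collapse touches separated by less than max_gap seconds into one continuous touch."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] < max_gap:
            merged[-1][1] = max(merged[-1][1], end)  # continuation of the same touch
        else:
            merged.append([start, end])              # a new touch begins
    return merged

raw_touches = [(2.0, 2.4), (2.8, 3.1), (10.0, 12.5)]  # taps at 2.0 s and 2.8 s merge
print(merge_touches(raw_touches))                      # [[2.0, 3.1], [10.0, 12.5]]
```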
Inter-Rater Reliability.
Again, approximately 25% of the MLU and touch codes were randomly selected for reliability coding. Reliability for the count data was calculated using Krippendorff’s alpha (Krippendorff, 2011). Krippendorff’s alpha (α) is a reliability coefficient developed to measure agreement between observers drawing distinctions among typically unstructured phenomena in the form of nominal data, and it is suited to small sample sizes. The reliability ratings for the MLU analysis were done for the number of total utterances, as well as for the length of each utterance. In terms of the total number of utterances by a parent across all bid types, the agreement between coders across 22 decisions was α = 0.788. The agreement between the two coders regarding the MLU per utterance was α = 0.763, across 56 decisions. We also calculated a reliability score for the touch data using Krippendorff’s alpha. The agreement between the two coders across 12 decisions was α = 0.766, where 1.0 is perfect agreement.
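For illustration, this coefficient can be computed with the third-party krippendorff Python package (assumed to be installed); the counts below are hypothetical, and treating the count data at the interval level of measurement is an assumption for this sketch.

```python
import krippendorff  # third-party package: pip install krippendorff

# Hypothetical counts per coded decision (rows = coders, columns = decisions)
reliability_data = [
    [4, 2, 6, 3, 5, 1],   # coder 1: e.g., number of utterances per coded bid
    [4, 3, 6, 3, 4, 1],   # coder 2
]
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha = {alpha:.3f}")
```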
Results
Parental MLU
We examined whether the MLU used by parents in either dyad type was related to a successful or failed bid for joint attention. When comparing the MLU for successful parental bids for joint attention across dyad types, we found a significant difference in the length of the utterances that hearing parents of hearing children used relative to the length of the utterances that hearing parents of deaf children used (M = 5.53 vs. M = 3.21, respectively; U = 10, p = 0.036). Interestingly, the MLU of utterances produced in failed bids for joint attention did not differ across dyad types: there was no difference (U = 46.5, p = 0.31) in the MLU of utterances produced by hearing parents of hearing children (M = 3.44) and hearing parents of deaf children (M = 2.53).
Parental Touch
We also compared the amount of tactile contact exhibited by parents throughout the entirety of the play session. Although parents across dyad types did not appear to use touch differentially to initiate joint attention with their children, a Mann-Whitney test showed that overall use of touch was greater for hearing parent-deaf child dyads (M = 4.2) than for hearing parent-hearing child dyads (M = 1.56; U = 17.5, p = 0.0466).
Discussion
In an effort to further characterize the nature of the interactions between the two dyad types included in this study, we ran two additional sets of analyses. The first focused on parental MLUs during bids for children’s attention, including any bids in which the auditory modality was used, either alone or in combination with other cues. In this case, we found a significant difference between dyad types, with hearing parents of hearing children producing more complex utterances—as indicated by MLU—than hearing parents of deaf children. Notably, this effect was carried by the successful bids for attention, with no such difference across dyad types emerging for failed bids. The lack of a significant difference in parental MLU during failed bids is intriguing. Perhaps parents of deaf children produce shorter, less complex utterances overall in bids for joint attention, with some proving successful and others not. In contrast, when hearing parents of hearing children produce such utterances, their children are less responsive and thus such bids are less likely to be successful.
Whether the adjustment is deliberate is unclear, although it is worth keeping in mind that these parents decided to pursue cochlear implantation for their deaf children and, for the most part, used spoken language (rather than sign) with them throughout the free-play session. The decision to pursue an implant may provide more motivation for these parents to speak as usual, in which case the difference we have observed here would be surprising, because they are not speaking comparably to the other parents in their successful bids. On the other hand, the parents of deaf children may speak as normally as they think is realistic for their deaf child to understand, and thus differently from how they would speak to a hearing child. There is a lack of evidence-based information available to guide hearing parents on how to interact with their deaf children prior to implantation; thus, it is difficult to draw conclusions about whether the reduction in parental MLU we have observed was intentional or not.
Another notable observation from this analysis of parents’ MLUs was that two of the deaf children managed to respond to every spoken bid produced by their parents. To be clear, all of the deaf children in this study failed newborn hearing screening and were characterized in their audiological profiles as severely to profoundly deaf. How then were these two children able to respond to their parents’ spoken bids for attention? Upon further investigation—and in support of the utility of multimodal communication—our analyses revealed that the speech to which these two children responded was consistently paired with a cue from another modality (most commonly a visual cue). In other words, the multimodal nature of parental interactions supported the deaf children’s ability to respond to their parents’ spoken bids for their attention, highlighting the importance of multimodal cue use to children’s understanding of communicative intent.
In addition to examining MLU in bids for joint attention, we quantified any tactile events that took place between parents and children throughout the play session (i.e., parental touching of children either directly or indirectly, both within and outside of bids for joint attention). This additional analysis revealed a significant difference across the two dyad types in the amount of touch used by parents, with parents of deaf children touching their children significantly more often than those of hearing children. Differences in the use of tactile engagement have been observed in previous research (e.g., Spencer et al., 2004), although those findings were based on a very small sample and the children were not candidates for cochlear implantation. Thus, touch as a communicative device for use with this population of children merits further investigation.
General Discussion
The goal of the current study was to establish whether and how hearing parents of deaf children depart from patterns of behavior typically observed in hearing parents of hearing children to direct children’s attention. Rather than observing differences, we found that parents in both dyad types produced similar behaviors when establishing joint attention with their children. For example, we predicted that hearing parents of hearing children would most often rely on a unimodal bid to initiate joint attention. However, we found no difference in the average number of unimodal and multimodal bids used by parents across the two dyad types. Moreover, the vast majority of bids produced by both dyad types combined auditory with visual information.
Although there is limited research on the role of the parent in hearing parent-deaf child dyads in establishing joint attention, the research that has been conducted thus far has demonstrated that hearing parents behaviorally accommodate their children’s hearing status (Lieberman et al., 2011). However, this work did not compare successful and failed instances of joint attention, nor did it quantify the specific cues parents used to establish joint attention. The lack of clear differences in parental behavior across dyad types in the present study indicates that more work is needed to assess the role that hearing parents play in initiating joint attention with their deaf children, particularly when the deaf children are candidates for cochlear implantation. While hearing parents were more likely to have success when using the auditory modality alone with hearing children, hearing parents of deaf children likewise incorporated the auditory modality into their bids for children’s attention. Although the deaf children in these dyads did not have access to the auditory modality, their parents used that modality as often as parents of hearing children. On the other hand, our secondary analyses revealed that the nature of the spoken language produced during bids for joint attention did differ between dyad types. Hearing parents of hearing children produced more complex utterances—specifically during bids for attention that proved successful—relative to hearing parents of deaf children. If, indeed, the directive for parents who plan to have their children implanted is to speak to their children as they normally would, our data reveal that this was not happening in these free-play sessions.
Aside from producing more complex speech to hearing than to deaf children, our results suggest that parents from both dyad types used the auditory modality in conjunction with other modalities, supporting arguments that communication that takes place across multiple modalities more effectively elicits children’s attention. Indeed, both hearing and deaf parents have been shown to engage their children during play interactions in ways that guide children’s attention to objects as well as to the social world (Koester & Lahti-Harper, 2010), so-called “intuitive parenting.” Given the present study’s findings that hearing parents often use the auditory modality with their deaf children despite the children having limited-to-no access to the auditory modality, this may be characterized as intuitive parenting as well (parent-interaction guidelines notwithstanding). Clearly, more research is needed on the role that the auditory modality plays in the establishment of joint attention between hearing parents and their deaf children, whether or not the children are candidates for cochlear implantation.
Applications for Intervention
Our findings point to the potential for therapeutic approaches that emphasize parental use of multimodal cues to establish and maintain joint attention with children, regardless of a particular child’s hearing status. Such an approach may facilitate language development in children more generally. For deaf or hard-of-hearing children who are candidates for cochlear implantation—particularly if the parent is opting to use spoken language without any accompanying sign language—such multimodal communication may be critical in providing a foundation for the child’s understanding of communicative intent. Likewise, greater focus on how parents use touch when establishing joint attention with their children may inform the structure of future therapeutic approaches. Given the evidence observed here and elsewhere (Akhtar & Gernsbacher, 2008) that joint attention can be established via non-auditory (e.g., tactile) means, it is not surprising that parents and their children—regardless of hearing status—use such means to communicate. Finally, although this is an exploratory study, the findings reported here highlight the utility of moving beyond standardized measures of joint attention to obtain rich, ecologically valid data on parent-child interactions for the assessment of development in both typically and atypically developing children.
Limitations and Future Directions
The findings from the present study are limited in a number of ways, not least by our small sample size. Previous studies (e.g., Lund & Schuele, 2015) have found no differences in audio-visual input from hearing parents to children with cochlear implants and to age-matched children with normal hearing, only to have those differences emerge when the comparison group was matched for vocabulary size. Such a manipulation is one possibility for extending the research presented here. Other approaches are needed to fill the critical gaps that exist in information about the effectiveness of different manners of communicating with a deaf child who is a candidate for cochlear implantation. For example, assuming spoken language is the intended outcome, there is currently little evidence supporting the use of both sign and oral language in combination relative to oral language only for deaf children who are candidates for cochlear implantation. To be clear, while there is no evidence that adding sign language facilitates spoken language acquisition (cf. Hall et al., 2017, 2018a, 2018b), there is also no conclusive evidence that adding sign language interferes with spoken language development (Fitzpatrick et al., 2016). Needless to say, cohort studies of communication methods for the current generation of pediatric cochlear implantees—both pre- and post-implantation—are sorely needed.
Our approach establishes a means by which specific behaviors produced by participants in a dyad can be tracked over time. In particular, given substantial evidence of the association between joint attention and successful language development (see Morales et al., 1998), understanding how parents accommodate children’s unique communicative needs is an important exercise. If a deaf child cannot access the auditory modality being used in their environment (i.e., spoken language), how can that child respond to bids for attention presented in that modality? In the current study, we did not differentiate among the effectiveness of different auditory cues used by parents (i.e., spoken language, noise made by a toy) in their bids for children’s attention. However, hearing parents of both hearing and deaf children used auditory information as a means of engaging their children in joint attention, and thus it is an open question how deaf children experience this information. Perhaps, because the parent is frequently speaking to the child, the child uses the visual cues of a moving mouth to infer that there is something worth attending to in the environment. If so, parent vocalizations may indeed be a better cue for deaf children than an auditory cue provided by shaking a toy, for example. And how will such experiences pre-implantation influence the child’s sensitivity to these attempts post-implantation? These are important issues to address in future research.
Overall, the present study demonstrates the utility of detailed tracking of parents’ multimodal sensory input during interactions with their children as a factor in parent-child communicative success. Here we observed that parents of both deaf and hearing children converged in their use of multimodal cues in support of successful interactions with their children. However, we also observed important differences in the complexity of the speech that parents from the two dyad types used and in the amount they touched their children throughout the interaction. While the current study did not provide evidence for the effectiveness of multimodal relative to unimodal cues for establishing joint attention in hearing-status mismatched dyads, it did highlight the degree to which parents—all parents—use combinations of modalities in interactions with their children. Additional research that includes many more parent-child dyads will be needed to determine whether the various cues are more or less effective when used in isolation rather than in combination, regardless of child hearing status. Such findings will have implications for specific interventions for deaf and hard-of-hearing children who are candidates for cochlear implantation, as well as for language development and support of attentional allocation in all children.
Acknowledgements
We thank Eve Clark and Susan Brennan for helpful comments on this work, and the parents and children for their participation.
Funding
This work was supported by the National Institute on Deafness and Other Communication Disorders under Grant R01 DC010075, and by the Dana Foundation.
Footnotes
Declaration of Interests Statement
Declarations of interest: none.
References
- Akhtar N, & Gernsbacher M. (2008). On privileging the role of gaze in infant social cognition. Child Development Perspectives, 2, 59–65.
- Bahrick LE. (2006). Intermodal perception and selective attention to intersensory redundancy: Implications for typical social development and autism. In The Wiley-Blackwell Handbook of Infant Development: Basic Research.
- Bahrick LE, & Lickliter R. (2000). Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Developmental Psychology, 36, 190.
- Bakeman R, & Adamson LB. (1984). Coordinating attention to people and objects in mother-infant and peer-infant interaction. Child Development, 55, 1278–1289.
- Bortfeld H, & Oghalai J. (2018). Joint attention in hearing parent-deaf child and hearing parent-hearing child dyads. IEEE Transactions on Cognitive and Developmental Systems.
- Bortfeld H. (2019). Functional near-infrared spectroscopy as a tool for assessing speech and spoken language processing in pediatric and adult cochlear implant users. Developmental Psychobiology, 61, 430–443.
- Brown R. (1973). A first language: The early stages. London: George Allen & Unwin.
- Chen MM, & Oghalai JS. (2016). Diagnosis and management of congenital sensorineural hearing loss. Current Treatment Options in Pediatrics, 2, 256–265.
- Colletti L. (2009). Long-term follow-up of infants (4–11 months) fitted with cochlear implants. Acta Oto-Laryngologica, 129, 361–366.
- Depowski N, Abaya H, Oghalai J, & Bortfeld H. (2015). Modality use in joint attention between hearing parents and deaf children. Frontiers in Psychology, 6, 1556.
- Fitzpatrick EM, Ham J, & Whittingham J. (2015). Pediatric cochlear implantation: Why do children receive implants late? Ear and Hearing, 36, 688.
- Fitzpatrick EM, Hamel C, Stevens A, Pratt M, Moher D, Doucet SP, Neuss D, Bernstein A, & Na E. (2016). Sign language and spoken language for children with hearing loss: A systematic review. Pediatrics, 137, e20151974.
- Frank MC, Slemmer JA, Marcus GF, & Johnson SP. (2009). Information from multiple modalities helps 5-month-olds learn abstract rules. Developmental Science, 12, 504–509.
- Gabouer A, Oghalai J, & Bortfeld H. (2018). Hearing parents’ use of auditory, visual, and tactile cues as a function of child hearing status. International Journal of Comparative Psychology, 31.
- Gale E, & Schick B. (2009). Symbol-infused joint attention and language use in mothers with deaf and hearing toddlers. American Annals of the Deaf, 153, 484–503.
- Hall M, Eigsti IM, Bortfeld H, & Lillo-Martin D. (2017). Auditory deprivation does not impair executive function, but language deprivation might: Evidence from a parent-report measure in deaf native signing children. Journal of Deaf Studies and Deaf Education, 22, 9–21.
- Hall ML, Eigsti IM, Bortfeld H, & Lillo-Martin D. (2018). Executive function in deaf children: Auditory access and language access. Journal of Speech, Language, and Hearing Research, 61, 1970–1988.
- Hall ML, Eigsti I, Bortfeld H, & Lillo-Martin D. (2018). Auditory access, language access, and implicit sequence learning in deaf children. Developmental Science, 21, 1–12.
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG. (2009). Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42, 377–381.
- Hoff S, Ryan M, Thomas D, Tournis E, Kenny H, Hajduk J, & Young NM. (2019). Safety and effectiveness of cochlear implantation of young children, including those with complicating conditions. Otology & Neurotology, 40, 454.
- Koester LS, & Lahti-Harper E. (2010). Mother-infant hearing status and intuitive parenting behaviors during the first 18 months. American Annals of the Deaf, 155, 5–18.
- Korver AM, Smith RJ, Van Camp G, Schleiss MR, Bitner-Glindzicz MA, Lustig LR, … & Boudewyns AN. (2017). Congenital hearing loss. Nature Reviews Disease Primers, 3, 1–17.
- Krippendorff K. (2011). Computing Krippendorff’s alpha-reliability. Retrieved from http://repository.upenn.edu/asc_papers/43
- Lieberman A, Hatrak M, & Mayberry RI. (2011). The development of eye gaze control for linguistic input in deaf children. Proceedings of the 35th Annual Boston University Conference on Language Development, 108, 91–404.
- Lieberman A, Hatrak M, & Mayberry R. (2014). Learning to look for language: Development of joint attention in young deaf children. Language Learning and Development, 10, 37–41.
- Lund E, & Schuele CM. (2015). Synchrony of maternal auditory and visual cues about unknown words to children with and without cochlear implants. Ear and Hearing, 36, 229–238.
- Marx V, & Nagy E. (2017). Fetal behavioral responses to the touch of the mother’s abdomen: A frame-by-frame analysis. Infant Behavior and Development, 47, 83–91.
- McHugh ML. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22, 276–282.
- Mehra S, Eavey RD, & Keamy DG Jr. (2009). The epidemiology of hearing impairment in the United States: Newborns, children, and adolescents. Otolaryngology-Head and Neck Surgery, 140, 461–472.
- Mitchell RE, & Karchmer MA. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4, 138–163.
- Miyamoto RT, Colson B, Henning S, & Pisoni D. (2017). Cochlear implantation in infants below 12 months of age. World Journal of Otorhinolaryngology-Head and Neck Surgery, 3, 214–218.
- Morales M, Mundy P, Delgado CE, Yale M, Messinger D, Neal R, & Schwartz HK. (2000). Responding to joint attention across the 6- through 24-month age period and early language acquisition. Journal of Applied Developmental Psychology, 21, 283–298.
- Morales M, Mundy P, & Rojas J. (1998). Following the direction of gaze and language development in 6-month-olds. Infant Behavior and Development, 21, 373–377.
- Mundy P, Delgado C, Block J, Venezia M, Hogan A, & Seibert J. (2003). A manual for the abridged Early Social Communication Scales (ESCS). Coral Gables, FL: University of Miami, Department of Psychology.
- Mundy P, & Gomes A. (1998). Individual differences in joint attention skill development in the second year. Infant Behavior and Development, 21, 469–482.
- Nowakowski ME, Tasker SL, & Schmidt LA. (2009). Establishment of joint attention in dyads involving hearing mothers of deaf and hearing children, and its relation to adaptive social behavior. American Annals of the Deaf, 154, 15–29.
- Roos EM, McDuffie AS, Weismer S, & Gernsbacher M. (2008). A comparison of contexts for assessing joint attention in toddlers on the autism spectrum. Autism, 12, 275–291.
- Seibert JM, Hogan AE, & Mundy PC. (1982). Assessing interactional competencies: The early social-communication scales. Infant Mental Health Journal, 3, 244–258.
- Sorkin DL. (2013). Cochlear implantation in the world’s largest medical device market: Utilization and awareness of cochlear implants in the United States. Cochlear Implants International, 14, S4–S12.
- Spencer PE, Bodner-Johnson BA, & Gutfreund MK. (1992). Interacting with infants with a hearing loss: What can we learn from mothers who are deaf? Journal of Early Intervention, 16, 64–78.
- Spencer PE. (2000). Looking without listening: Is audition a prerequisite for normal development of visual attention during infancy? Journal of Deaf Studies and Deaf Education, 5, 291–302.
- Spencer PE. (2004). Individual differences in language performance after cochlear implantation at one to three years of age: Child, family, and linguistic factors. Journal of Deaf Studies and Deaf Education, 9, 395–412.
- Suarez-Rivera C, Smith LB, & Yu C. (2019). Multimodal parent behaviors within joint attention support sustained attention in infants. Developmental Psychology, 55, 96–109.
- Tomasello M, & Farrar MJ. (1986). Joint attention and early language. Child Development, 57, 1454–1463.
- Vanormelingen L, De Maeyer S, & Gillis S. (2016). A comparison of maternal and child language in normally-hearing and hearing-impaired children with cochlear implants. Language, Interaction and Acquisition, 7, 145–179.
- Waxman RP, & Spencer PE. (1997). What mothers do to support infant visual attention: Sensitivities to age and hearing status. Journal of Deaf Studies and Deaf Education, 2, 104–114.
- Wetherby A, & Prizant B. (2002). Communication and symbolic behavior scales: Developmental profile-first normed edition. Baltimore, MD: Paul H. Brookes.
- Wittenburg P, Brugman H, Russel A, Klassman A, & Sloetjes H. (2006). ELAN: A professional framework for multimodality research. In Proceedings of LREC 2006, Fifth International Conference on Language Resources and Evaluation. Retrieved from https://tla.mpi.nl/tools/tla-tools/elan/


