Author manuscript; available in PMC: 2020 Nov 1.
Published in final edited form as: Infant Behav Dev. 2019 May 15;57:101322. doi: 10.1016/j.infbeh.2019.04.004

Effects of children’s hearing loss on the synchrony between parents’ object naming and children’s attention

Chi-hsin Chen 1, Irina Castellanos 1,2, Chen Yu 3, Derek M Houston 1,2
PMCID: PMC6856413  NIHMSID: NIHMS1529401  PMID: 31102946

Abstract

Children’s attentional state during parent-child interactions is important for word learning. The current study examines the real-time attentional patterns of toddlers with and without hearing loss (N = 15, age range: 12-37 months) in parent-child interactions. High-density gaze data recorded from head-mounted eye-trackers were used to investigate the synchrony between parents’ naming of novel objects and children’s sustained attention on the named objects in joint play. Results show that the sheer quantities of parents’ naming and children’s sustained attention episodes were comparable in children with hearing loss and their peers with normal hearing. However, parents’ naming and children’s sustained attention episodes were less synchronized in the hearing loss group compared to children with normal hearing. Possible implications are discussed.

Keywords: parent-child interactions, synchrony in interaction, children’s attention, word learning, children with hearing loss, eye-tracking

Introduction

Many children with hearing loss display spoken language delays even after long-term use of hearing aids or cochlear implants (for a review, see Lederberg, Schick, & Spencer, 2013). The majority of extant research on the language outcomes of children with hearing loss has focused on audiological, demographic, environmental, and linguistic factors (e.g., Boons et al., 2012; Cruz, Quittner, Marker, & DesJardin, 2013; Nicholas & Geers, 2006; Niparko et al., 2010; VanDam, Ambrose, & Moeller, 2012). Another important, yet unexplored, area is how children’s attention in real-time parent-child interactions affects their language development. A growing body of evidence with hearing children suggests that infants’ sustained attention to objects during object play is associated with their learning of novel words and long-term language development (Yu & Smith, 2012; Yu, Suanda, & Smith, 2018). Some previous studies suggest that children with hearing loss and their hearing peers show different attentional patterns in visual selection tasks (Dye & Hauser, 2014; Smith, Quittner, Osberger, & Miyamoto, 1998). However, it is unknown whether or how hearing loss affects children’s attention in naturalistic interactions and whether the attentional patterns of children with hearing loss in object play also affect their word learning and long-term language development. As a first step toward filling this gap, the overarching goal of the current study is to examine the real-time attentional patterns of children with and without hearing loss during parent-child interactions. Specifically, we use head-mounted eye-trackers to record high-density gaze data and investigate the synchrony between parents’ naming of novel objects and children’s sustained attention in joint object play.

Synchrony between Parents’ Naming and Children’s Attention in Parent-child Interactions

Most of infants’ information processing takes place during moments of sustained attention (Colombo, 2011; Frick & Richards, 2001; Richards, 1997). In addition, infants’ focused or sustained attention during object play predicts later language and cognitive development (Kannass & Oakes, 2008; Lawson & Ruff, 2004; Yu et al., 2018). Relatedly, naming objects that children attend to is positively correlated with the number of words they learn in both experimental and natural settings (MacRoy-Higgins & Montemarano, 2016; Tomasello & Farrar, 1986). These findings suggest that the synchrony between parents’ naming of an object and children’s sustained attention to it may be an important factor in word learning. To break this down further, the synchrony is jointly created by both parents and children: one component is parents’ naming of objects and the other is children’s sustained attention to those objects. Sometimes parents’ naming starts before the onset of children’s sustained attention; at other times children’s sustained attention to an object precedes parents’ naming. Regardless of which event starts first, in both cases the two events may overlap in time and thus be synchronous with each other. The main goal of the current study is to investigate, in natural parent-child interactions, whether children’s hearing status affects how often parents and children jointly create these synchronous moments.

We divided our main goal into three sub-goals, each focusing on a different set of measures: parents’ naming, children’s sustained attention, and the synchrony between these two types of events. The current literature suggests potential group differences in these three sets of measures. First, some studies have shown that hearing parents of children with hearing loss provide similar amounts of linguistic input, as measured by total word or utterance counts, compared to hearing parents of children with normal hearing (e.g., Chen, Castellanos, Yu, & Houston, in press; Fagan, Bergeson, & Morris, 2014; VanDam et al., 2012). However, it has also been found that hearing parents of children with hearing loss tend to use more directives and prohibitions in their interactions (e.g., Fagan et al., 2014; Henggeler, Watson, & Copper, 1984). Given these differences in interactional style, it is possible that parents of children with hearing loss produce fewer naming instances during interactions. Second, some previous studies suggest that children with hearing loss and their hearing peers show different attentional patterns in visual selection tasks (Dye & Hauser, 2014; Smith et al., 1998). One possible effect of these attentional differences is that children with hearing loss may create fewer sustained attention moments in naturalistic contexts, as they have been shown to be more easily distracted by task-irrelevant information (e.g., Dye & Hauser, 2014). Finally, interactions between hearing parents and children with hearing loss have been shown to be less synchronous, and such dyads spend less time in joint attention or joint engagement than hearing parents with hearing children (Cejas, Barker, Quittner, & Niparko, 2014; Fagan et al., 2014). It is thus possible that the sheer quantities of parents’ naming instances and children’s sustained attention moments do not differ between children with hearing loss and their hearing peers, but that the quality of input, as measured by the temporal synchrony or overlap between parents’ naming and children’s sustained attention, is greater in the hearing groups. If parents’ naming and children’s sustained attention both occur frequently but do not overlap, or if parents’ naming of an object overlaps with children’s sustained attention on a different object, then the quality of the naming context is not as good as when the parent names an object while the child shows sustained attention to that same object. Therefore, it is important to examine the extent to which these two types of events are temporally coupled during parent-child interactions in children with and without hearing loss.

Current Study

We recruited a group of toddlers with hearing loss (HL) and two groups of children with normal hearing, one group matched to the HL children in chronological age (CA) and another matched to them in hearing age (HA). In the experiment, children engaged in an object-play session with their parents. The inclusion of these three groups of children allowed us to test 1) whether the patterns seen in the HL group were more similar to children with the same chronological age and overall developmental level or to children with similar hearing experiences; or 2) whether they were affected by children’s hearing status per se.

We used head-mounted eye-trackers to collect high-density gaze data and recorded parents’ speech during free-flowing parent-child interactions. With these fine-grained data, we conducted three sets of data analyses. First, we calculated parents’ naming frequency and compared whether parents of children in different groups differed in their frequency of naming novel objects in joint object play. Second, we examined the duration of children’s looks to objects and asked whether children with normal hearing and children with hearing loss displayed differences in the frequency with which they produced sustained fixations to objects. Based on previous studies on visual attention development, we operationally defined sustained fixations, or sustained attention episodes, as looks lasting 3 seconds or longer (Kannass & Oakes, 2008; Lawson & Ruff, 2004; Yu et al., 2018). Third, we measured the overlap between parents’ naming of objects and children’s sustained attention on the named objects. The first two sets of analyses allowed us to check the base rates of parents’ naming and children’s sustained attention episodes. The last set of analyses enabled us to focus on the temporal synchrony between these two types of events. Together, these three sets of analyses address the key question of the present study: whether potential differences in parent-child interactions among the three groups lie primarily in the quantity of parents’ naming, the quantity of children’s sustained attention episodes, or the coordination between these two types of events.

Method

Participants

Participants were 15 parent-toddler dyads. Most of the parents were mothers; only one was a father (in the HA group; see below for the definition of HA). Children in 5 dyads had hearing loss (HL, see Table 1). Children in another 5 dyads had normal hearing and were matched to the HL group by chronological age (plus or minus 2 months; subsequently termed the CA group), and the remaining 5 children also had normal hearing and were matched to the HL group by hearing age (subsequently termed the HA group). Children’s hearing age was used as a rough estimate of their receptive language experience. The hearing age of children in the HL group was determined by the time since they were fitted with hearing aid(s) or cochlear implants. The hearing age of children in the CA and HA groups was calculated based on their chronological age. The entire sample was broadly representative of the State of Indiana (86% European American, 10% African American, 4% Asian, Hispanic, and other) and consisted predominantly of working- and middle-class families. Toddlers were recruited through referrals by clinicians (HL children), birth records, and community organizations (e.g., children's outreach events). Recruitment and experimental procedures were approved by the University Institutional Review Board, and all parents gave informed consent prior to participation.

Table 1.

Participant Information

| Participant # | Chronological Age | Hearing Age | Sex | Degree of Hearing Loss (Left / Right) | Hearing Device (Left / Right) | CA Age | HA Age |
| 1 | 27 | 22 | M | severe | hearing aid / hearing aid | 25 | 23 |
| 2 | 30 | 10 | F | severe to profound | cochlear implant / cochlear implant | 28 | 12 |
| 3 | 34 | 14 | F | severe | cochlear implant | 35 | 14 |
| 4 | 36 | 25 | F | profound / mild-moderate | hearing aid | 36 | 24 |
| 5 | 37 | 12 | M | profound / severe | cochlear implant / hearing aid | 36 | 13 |
| Mean | 32.8 | 16.6 | | | | 32.0 | 17.2 |

Note: All ages are reported in months. HL: children with hearing loss, CA: chronological-age-matched children with normal hearing, HA: hearing-age-matched children with normal hearing.

Design

Parents and their toddlers were invited to the lab to engage in an object-play session (Fig. 1). During the experiment, both participants wore a head-mounted eye-tracker that recorded their gaze (Positive Science, http://www.positivescience.com/; also see Franchak, Kretch, Soska, & Adolph, 2011). The experiment was divided into 4 trials, each lasting 1.5 minutes, for a total interaction time of 6 minutes. In each trial, the participants played with 3 novel objects. The objects were similar in size (average size: 288 cm3), but each had a distinctive color (blue, green, or red) and was given a novel name (e.g., tema, dodi). Two sets of objects were used. Participants played with each set twice in an alternating order, and the order of the object sets was counter-balanced across participants. Parents were instructed to play with their children as they normally would; however, they were asked to use the novel names if they wanted to refer to the objects.

Fig. 1.


The parent and child sat across from each other and played with 3 novel objects in each trial. Both participants wore a head-mounted eye-tracker. Each eye-tracker was composed of an eye camera which recorded eye movements and a scene camera which recorded the first-person view. Additionally, the parent’s eye-tracker had a microphone, which recorded the parent’s speech.

Data Recording

Each eye-tracker was composed of two cameras: an eye camera pointed at the participant’s right eye, which recorded eye movements, and a scene camera placed on the participant’s forehead, which recorded what was in front of the participant. The eye-trackers recorded at a sampling rate of 30 Hz, yielding approximately 10,800 frames from each camera for the 6-minute object-play session. In addition to gaze data, parents’ speech was recorded by a mini-microphone incorporated into their eye-tracker. Even though we recorded gaze data for both parents and children, in the current study we focused on children’s gaze and examined its relationship with parents’ naming of the novel objects in play.

Data Coding

Gaze data were coded frame by frame. Three regions of interest (ROIs) were identified in each trial: one for the blue, one for the green, and one for the red object. Trained coders coded whether children’s gaze overlapped with any of the ROIs in each frame and, if so, which ROI. Participants did not show significant differences in the numbers of looks to the three ROIs across the two object sets used in the study (Wald χ2 = 1.260, n.s.); therefore, data from the two sets of objects were combined in the analyses. In total, children generated 2,036 looks to the three ROIs during the experiment. On average, children spent 66% of the total trial time looking at the three ROIs (range: 45%-84%). Visual fixations to the three ROIs served as the gaze data in the following analyses (detailed information about ROI coding can be found in Appendix B of Yu & Smith, 2017).
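As an illustration of this coding scheme, the frame-level ROI labels can be collapsed into continuous looks and filtered by the 3-second sustained-fixation threshold used in this study. The sketch below is our own, not the authors' actual pipeline; all function and variable names are hypothetical:

```python
from itertools import groupby

FPS = 30               # eye-tracker sampling rate (30 Hz, as in the study)
MIN_SUSTAINED_S = 3.0  # sustained-fixation threshold (looks lasting 3 s or longer)

def segment_looks(frames):
    """Collapse per-frame ROI codes ('blue', 'green', 'red', or None for
    off-ROI frames) into continuous looks: (roi, n_frames, duration_s)."""
    looks = []
    for roi, run in groupby(frames):
        n = len(list(run))
        if roi is not None:
            looks.append((roi, n, n / FPS))
    return looks

def sustained(looks):
    """Keep only looks lasting 3 s or longer (>= 90 frames at 30 Hz)."""
    return [look for look in looks if look[2] >= MIN_SUSTAINED_S]

# Toy frame stream: 100 frames on the blue object, 30 off-ROI, 45 on red.
frames = ["blue"] * 100 + [None] * 30 + ["red"] * 45
looks = segment_looks(frames)
long_looks = sustained(looks)  # only the 100-frame (3.3 s) blue look qualifies
```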

Parents’ speech was transcribed into utterances following the SALT (Systematic Analysis of Language Transcripts, Miller & Chapman, 1985) convention. The timing of the onset and offset of each utterance was determined objectively based on the waveform using the open-source program Audacity (https://www.audacityteam.org/). We then identified the utterances in which novel object names occurred (subsequently termed naming utterances). There was a total of 322 naming utterances, which served as the speech data in the following analyses.

Dependent Variables

The dependent variables in the current study fell into 3 major categories: parents’ naming utterances, children’s ROI looks, and measures of synchrony. We will first report the number and duration of parents’ naming utterances across groups. Following that, we will analyze whether there were group differences in the duration of all of children’s ROI looks (of any length) and in the proportion of children’s ROI looks defined as sustained fixations (i.e., looks lasting 3 seconds or longer). To investigate the synchrony between parents’ naming utterances and children’s sustained fixations, we categorized parents’ utterances into Hits and Misfires. As illustrated in Fig. 2, a Hit was a naming utterance of an object that overlapped, either partially or completely, with children’s sustained fixation to the same object, whereas a Misfire was a naming utterance of an object that did not overlap with children’s sustained attention to the same object. Based on these categories, we will first examine whether there were group differences in the proportion of parents’ naming utterances categorized as Hits (which is complementary to the proportion of Misfires). The Hit instances were further divided into instances partially overlapping with children’s sustained fixations and instances completely overlapping with children’s sustained fixations. We calculated the latter subcategory of Hits to investigate group differences in parents’ naming that occurred within children’s sustained attention moments; that is, cases in which the parent named an object that the child was already fixating and the child’s fixation on the named object lasted until after the end of the utterance. Finally, we will report the mean proportion of parental utterance duration that overlapped with children’s sustained attention: for every naming utterance, we calculated what proportion of its duration (0.0-1.0) overlapped with the child’s sustained attention to the named object.
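The Hit/Misfire categories and the duration-overlap measure can be made concrete with a small sketch. This is our own illustration under the definitions given above (a Hit overlaps at least partially with a sustained fixation to the named object; a complete Hit falls entirely within one such fixation); the interval format and all names are hypothetical:

```python
def overlap(a, b):
    """Length (in seconds) of the temporal overlap of intervals a and b,
    each given as (onset, offset)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def classify(utterance, fixations):
    """utterance: (onset, offset, named_object); fixations: sustained looks
    as (onset, offset, object). Returns (label, fraction), where label is
    'complete', 'partial', or 'misfire', and fraction is the proportion of
    the utterance duration overlapping sustained attention to the named object."""
    on, off, obj = utterance
    same = [(s, e) for s, e, o in fixations if o == obj]
    total = sum(overlap((on, off), f) for f in same)
    if total == 0:
        return "misfire", 0.0
    if any(s <= on and off <= e for s, e in same):  # utterance inside one look
        return "complete", 1.0
    return "partial", total / (off - on)

# Toy data: one 4-second sustained look at the object named "tema".
fixations = [(0.0, 4.0, "tema")]
classify((1.0, 2.0, "tema"), fixations)  # complete Hit: naming falls inside the look
classify((3.5, 5.0, "tema"), fixations)  # partial Hit: 0.5 s of the 1.5-s utterance overlaps
classify((1.0, 2.0, "dodi"), fixations)  # Misfire: attention was on a different object
```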

Fig. 2.


Hits vs. Misfires. (A) A Hit was a parent’s naming utterance of an object that overlapped, either partially or completely, with a child’s sustained fixation to the same object. (B) A Misfire was a naming utterance that did not overlap with a child’s sustained fixation to the same object.

Results

In order to examine how hearing loss may affect the synchrony between parents’ naming behavior and children’s attention in joint play, we first compared the number and duration of parents’ naming instances across groups. Second, we investigated the overall pattern of children’s gaze data by calculating the mean duration of children’s ROI looks and then focused on looks lasting 3 seconds or longer, which we defined as sustained fixations or sustained attention episodes. Finally, we examined the synchrony between parents’ naming of an object and children’s sustained attention to the same object. We will report three sets of analyses, each tapping a different aspect of synchrony: 1) the Hit (and Misfire) rates across groups, 2) the proportion of Hits completely overlapping with children’s sustained attention episodes, and 3) the mean proportion of parents’ naming utterance duration that overlapped with children’s sustained attention to the named objects. Detailed behavioral measures for each individual child in the HL group and their relationships to children’s demographic characteristics can be found in the Appendix.

Parents’ Naming Utterances

Overall, parents across groups produced similar numbers of utterances that included the names of the novel objects during play (HL: N = 101, CA: N = 120, HA: N = 101). Between-group ANOVAs showed no significant group difference in the mean number of naming utterances produced by parents (HL: M = 20.20; CA: M = 24.00; HA: M = 20.20; group difference: F(2, 12) = 0.08, n.s.), nor in the mean duration of utterances (HL: M = 1.38 s; CA: M = 1.35 s; HA: M = 1.35 s; group difference: F(2, 12) = 0.90, n.s.). These results suggest that parents’ overall naming behavior did not differ as a function of group.

Children’s Object Fixations

We first report overall ROI look durations in different groups and then compare the proportions of sustained fixations (i.e., looks lasting 3 seconds or longer) across groups. Instead of using conventional linear regression models which assume independence and normal distribution of observations, we used Generalized Estimating Equations (GEE) in all gaze-related analyses to account for the non-independence of data, to allow for testing of non-normal distributions, and to have more robust estimates of errors for the complex gaze data (Liang & Zeger, 1986).

As shown in Fig. 3A, children across groups had similar object look durations, and there was no significant group difference (HL: M = 1.80 s; CA: M = 2.13 s; HA: M = 2.00 s; group difference: Wald χ2 = 1.13, n.s.). They also produced comparable proportions of fixations that lasted 3 seconds or longer (Fig. 3B-D; HL: 0.17; CA: 0.22; HA: 0.20; group difference: Wald χ2 = 1.26, n.s.). These results suggest that the overall quantity of gaze data did not differ across groups and that children in the different groups generated similar numbers of sustained fixations to objects during the play session.

Fig. 3.


Duration of child ROI looks. (A) Mean ROI look duration (and SE) across groups. (B) ROI look distribution in the HL group. (C) ROI look distribution in the CA group. (D) ROI look distribution in the HA group. The red boxes highlight the percentage (and number) of all ROI looks that are longer than 3 seconds.

Synchrony between Parents’ Naming Utterances and Children’s Sustained Attention

The next three sets of analyses examined the synchrony between parents’ naming utterances and children’s sustained attention. In the first two sets of analyses, we examined the proportion of the number of naming utterances overlapping partially or completely with children’s sustained fixations on the same target object. These two analyses provide an overall picture of the synchrony between parents’ naming and children’s sustained attention. In the last set of analyses, we further analyzed how much children attended to the named objects during those naming moments by calculating the proportion of utterance duration overlapping with children’s sustained attention episodes.

Hits and Misfires

First, we analyzed the proportions of parents’ utterances categorized as Hits and Misfires across groups. Because Hits and Misfires are complementary to each other, the Hit rate (see Total Hits in Table 2) and Misfire rate add up to 1 in each group.

Table 2.

Proportion of Parents’ Utterances Categorized as Hits and Misfires.

| Group | Hits: Partial overlap | Hits: Complete overlap | Total Hits | Misfires |
| HL | 0.11 | 0.16 | 0.27 | 0.73 |
| CA | 0.14 | 0.32 | 0.46 | 0.54 |
| HA | 0.13 | 0.27 | 0.40 | 0.60 |

As shown in Table 2, the Total Hit rate for the HL group was lower than for the CA and HA groups. These group differences were significant (Wald χ2 = 6.849, p < .05). Pairwise comparisons indicated that the Hit rate for the CA group was significantly higher than the Hit rate for the HL group (p = .009), and the difference between the HA and HL groups approached statistical significance (p = .052). These results suggest that there was better synchrony between parents’ naming and children’s sustained attention in the CA group than in the HL group.

Hits Completely Overlapping with Sustained Fixations

Next, we focused on the Hit instances that overlapped completely with sustained attention episodes, because these were the naming instances that occurred within children’s sustained attention moments (see Table 2). For the HL group, a smaller proportion of parents’ naming instances occurred within children’s sustained attention moments than for the CA and HA groups. These group differences were significant (Wald χ2 = 7.27, p < .05). Pairwise comparisons showed that both the CA and HA groups had more utterances occurring within children’s sustained attention moments than the HL group (CA > HL at p = .006, HA > HL at p = .035). Along with the analyses of Total Hits, these results suggest that parents’ naming utterances were more likely to overlap with children’s sustained attention in the two hearing groups and that parents in the two hearing groups tended to name the objects within children’s sustained attention moments.

Utterance Duration Overlapping with Sustained Fixation

The analyses above focused on the degree of synchrony by examining whether or not parents’ naming overlapped (partially or completely) with children’s sustained attention. The last set of analyses further focused on the amount of time children looked at the named objects during those naming moments. From a learning perspective, the more the child attends to an object while it is being named, the better. For each naming utterance, we calculated the proportion of the utterance duration overlapping with children’s sustained attention. As shown in Fig. 4, the proportion of overlap was lower in the HL group than in the CA and HA groups (HL: M = .21; CA: M = .39; HA: M = .33; group difference: Wald χ2 = 12.59, p < .01). Pairwise comparisons showed significant differences between the HL group and the CA and HA groups (HL < CA at p = .001, HL < HA at p = .044). These results suggest that, in the two hearing groups, children looked at the named objects for a longer duration when parents named those novel objects than children in the HL group did.

Fig. 4.


Mean proportion of utterance duration (and SE) overlapping with children’s sustained attention across groups.

Discussion

Prior literature indicates that the quantity of linguistic input has a positive association with children’s vocabulary development (e.g., Rowe, 2008, 2012). It has also been shown that children’s sustained attention predicts later language development (Kannass & Oakes, 2008; Yu et al., 2018). The current study suggests that the quantities of parents’ naming of novel objects and of children’s sustained attention were comparable in children with hearing loss and their chronological- and hearing-age-matched peers. However, there were significant differences in the temporal synchrony of these two types of events in the HL group compared to their CA and HA peers. These group differences appear to arise as a function of children’s hearing status per se, rather than being driven by children’s chronological age or hearing experience.

Synchrony between Parents’ Naming and Children’s Sustained Attention

There is a growing body of research, with children both with and without hearing loss, examining the synchrony of the multimodal cues, such as gazes and hand actions, that parents provide to their children in interactions (e.g., Gogate, Bahrick, & Watson, 2000; Lund & Schuele, 2015; Suanda, Smith, & Yu, 2016). One explicit or implicit message from these studies is that the synchrony of multimodal cues contributes to children’s attention to objects, particularly when they are being named, and that this can facilitate word learning. Empirical evidence has shown that synchronous multimodal cues do aid word learning (Rader & Zukow-Goldring, 2012). In the current study, instead of focusing on the synchrony of multimodal cues, which potentially attracts children’s attention to the objects being named, we directly examined the synchrony between children’s sustained attention to objects and parents’ naming of the same objects in free-flowing interaction.

Consistent with prior research using overall word or utterance counts, our study suggests that parents of children with hearing loss provide comparable numbers of naming instances to parents of hearing children (Chen et al., in press; Fagan et al., 2014; VanDam et al., 2012). Some prior studies using visual selection tasks revealed that children with hearing loss were more likely to be distracted by task-irrelevant information (e.g., Dye & Hauser, 2014). Interestingly, we found that, in naturalistic object-play contexts, children with hearing loss generated similar numbers of sustained attention episodes as their hearing peers. These two sets of findings indicate that, in terms of the sheer quantities of naming input and sustained attention moments, the word-learning environment created for HL children in parent-child toy play is similar to that of their CA and HA peers. However, when we examined the temporal synchrony between these two types of events, we found that they were less synchronized in the HL group than in the CA and HA groups. Naming objects that children pay attention to is positively associated with children’s vocabulary development (Tomasello & Farrar, 1986; Yu et al., 2018). The less coupled/synchronized patterns seen in the HL group may cause difficulties in learning novel words, and this may have cascading effects on their long-term language development (Houston, Stewart, Moberly, Hollich, & Miyamoto, 2012).

Parents’ Sensitivity to Children’s Attentional State

The comparisons among the HL, CA, and HA groups suggest that children’s hearing status may contribute to the differences among the three groups. Moreover, children’s hearing status, and parents’ perception of that status, may in turn affect how parents interact with HL children as they jointly create learning experiences for the children. Research with hearing children indicates that parental sensitivity or responsiveness facilitates children’s language development (McDuffie & Yoder, 2010; Tamis-LeMonda, Kuchirko, & Song, 2014). One way sensitivity can be expressed is by providing the names of novel objects at the time when children are paying attention to them. The analyses of the Hit rates suggest that parents’ naming of an object and children’s sustained attention to the named object were more likely to overlap in the CA and HA groups than in the HL group. The overlap between the parent’s naming behavior and the child’s attentional behavior can arise in two ways: 1) the parent names an object that the child is already paying attention to, or 2) the child looks toward an object that the parent is naming. In the first scenario, the parent’s naming starts after the onset of the child’s sustained attention. This scenario roughly reflects parents’ sensitivity, as such naming events are contiguous with, and likely contingent on, children’s attentional state (Tamis-LeMonda et al., 2014).

The analyses of the Hit instances in which parents’ utterances overlapped completely with children’s sustained attention are relevant to the first scenario, because those are the naming instances that occurred within the child’s sustained attention moments. Our results showed significant group differences between children with and without hearing loss. One explanation is that parents in the HL group may be more directive and (therefore) less sensitive to children’s attentional state (Castellanos, Pisoni, Yu, Chen, & Houston, 2018). Alternatively, it could be that they are less attuned to providing information about the object of children’s interest at the optimal time. Either way, these results may provide insights for intervention programs targeting improvements in parents’ sensitivity or responsiveness to children’s attentional state. For example, one possibility is to train parents to become more sensitive to their children’s attentional state and to encourage them to talk about the objects of children’s interest by providing object labels and/or relevant information about the objects (e.g., functions or features). This may facilitate children’s learning of object concepts and their names and improve long-term language development.

Future Work

One thing to note is that, in our study, the sample size per group (i.e., 5 dyads) was small compared to many studies using screen-based looking measures with fixed trials (e.g., Fernald, Swingley, & Pinto, 2001; Tincoff & Jusczyk, 1999; Waxman & Markow, 1995). However, because we coded children’s looking behaviors frame by frame in naturalistic interactions, we collected an average of 136 ROI looks per child (2,036 ROI looks/15 children). Compared to studies with fixed trials, which usually consist of no more than 20-30 trials, the amount of gaze data we collected for each child was substantially larger. Prior studies on infants’ sensorimotor development have shown that high-density data collected within individuals can produce reliable and generalizable results, even with a small sample size (Yoshida & Smith, 2008; Yu & Smith, 2012). The high-density gaze data we collected allowed us to examine children’s looking behaviors in continuous natural interactions and to investigate how those looking behaviors were related to parents’ linguistic input. This type of rich dataset has many potential uses. For example, children with hearing loss are a heterogeneous group, including children with different levels of hearing loss, different etiologies, and different device uses. One future direction is to include a larger sample, separate children with different demographic characteristics, and examine whether children’s level of hearing loss or device use interacts with their attentional patterns and with how their parents talk to them.

Conclusions

Our study is the first to use high-density gaze data collected from real-time interactions to investigate the synchrony between parents' naming behavior and episodes of sustained attention in children with hearing loss. Even though dyads in the HL, CA, and HA groups produced comparable quantities of naming utterances and sustained attention episodes, these two types of events were less synchronized in the HL group. These results provide the groundwork for future studies to explore the cascading effects of temporal synchrony in parent-child interactions on language and cognitive development in both typically developing children and clinical populations.

Research Highlights.

  • The current study examines the real-time attentional patterns of toddlers with and without hearing loss in parent-child interactions.

  • We used head-mounted eye-trackers to investigate the synchrony between parents’ naming of novel objects and children’s sustained attention on the named objects in play.

  • The sheer quantities of parents’ naming and children’s sustained attention episodes were comparable in children with hearing loss and their peers with normal hearing.

  • However, parents’ naming and children’s sustained attention episodes were less synchronized in children with hearing loss compared to their peers with normal hearing.

Acknowledgements

This research was supported by grants from the National Institute on Deafness and Other Communication Disorders (T32 DC00012), the National Institutes of Health (R01 HD074601), and the Indiana University Collaborative Research Grant. We thank Heidi Neuberger, Steven Elmlinger, Charlene Ty, Mellissa Hall, and Seth Foster for help with data collection, and members of the Computational Cognition and Learning Laboratory and the DeVault Otologic Research Laboratory for help with coding. We also thank Seth Foster and Tian (Linger) Xu for developing data management and processing software.

Appendix

HL Group’s Demographic Information and Behavioral Results

| Participant # | CA (months) | HA (months) | Degree of Hearing Loss (Left / Right) | Hearing Device (Left / Right) | Children’s # of ROI looks | Children’s sustained attention episodes | % of Hits for parents’ utterances | % of utterance duration overlapping with sustained attention |
|---|---|---|---|---|---|---|---|---|
| 1 | 27 | 22 | severe | Hearing aid / Hearing aid | 122 | 25 | 32 | 25 |
| 2 | 30 | 10 | severe to profound | Cochlear implant / Cochlear implant | 106 | 23 | 41 | 27 |
| 3 | 34 | 14 | severe | Cochlear implant | 137 | 27 | 15 | 10 |
| 4 | 36 | 25 | profound / mild-moderate | Hearing aid | 118 | 22 | 16 | 13 |
| 5 | 37 | 12 | profound / severe | Cochlear implant / Hearing aid | 247 | 20 | 27 | 25 |

CA: chronological age; HA: hearing age
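As an illustration of how group-level summary measures can be derived from the per-child counts in the table above, the short sketch below computes the HL group's means for ROI looks, sustained attention episodes, and Hit percentages. This is a minimal sketch, not analysis code from the study; the variable names are ours, and the values are copied directly from the table.

```python
from statistics import mean

# Per-child values for HL participants 1-5, copied from the appendix table.
roi_looks = [122, 106, 137, 118, 247]
sustained_attention_episodes = [25, 23, 27, 22, 20]
pct_hits_for_parent_utterances = [32, 41, 15, 16, 27]

# Group means, rounded to one decimal place.
print(f"Mean ROI looks per child: {mean(roi_looks):.1f}")  # 146.0
print(f"Mean sustained attention episodes: "
      f"{mean(sustained_attention_episodes):.1f}")  # 23.4
print(f"Mean % of Hits for parents' utterances: "
      f"{mean(pct_hits_for_parent_utterances):.1f}")  # 26.2
```

Note that the HL group's mean of 146 ROI looks per child is somewhat above the whole-sample average of 136 reported in the text, driven largely by participant 5's 247 looks.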

Footnotes

Conflict of Interest Statement

We declare that we have no conflict of interest.


References

  1. Boons T, Brokx JPL, Dhooge I, Frijns JHM, Peeraer L, Vermeulen A, Wouters J, & van Wieringen A (2012). Predictors of spoken language development following pediatric cochlear implantation. Ear and Hearing, 33(5), 617–639.
  2. Castellanos I, Pisoni DB, Yu C, Chen C, & Houston DM (2018). Embodied cognition in prelingually deaf children with cochlear implants: Preliminary findings. In Knoors H, & Marschark M (Eds.), Evidence-based practice in deaf education. New York: Oxford University Press.
  3. Cejas I, Barker DH, Quittner AL, & Niparko JK (2014). Development of joint engagement in young deaf and hearing children: Effects of chronological age and language skills. Journal of Speech, Language, and Hearing Research, 57(5), 1831–1841.
  4. Chen C, Castellanos I, Yu C, & Houston DM (in press). Parental linguistic input and its relation to toddlers’ visual attention in joint object play: A comparison between children with normal hearing and children with hearing loss. Infancy.
  5. Colombo J (2001). The development of visual attention in infancy. Annual Review of Psychology, 52(1), 337–367.
  6. Cruz I, Quittner AL, Marker C, & DesJardin JL (2013). Identification of effective strategies to promote language in deaf children with cochlear implants. Child Development, 84(2), 543–559.
  7. Dye MW, & Hauser PC (2014). Sustained attention, selective attention and cognitive control in deaf and hearing children. Hearing Research, 309, 94–102.
  8. Fagan MK, Bergeson TR, & Morris KJ (2014). Synchrony, complexity and directiveness in mothers’ interactions with infants pre- and post-cochlear implantation. Infant Behavior and Development, 37(3), 249–257.
  9. Fernald A, Swingley D, & Pinto JP (2001). When half a word is enough: Infants can recognize spoken words using partial phonetic information. Child Development, 72(4), 1003–1015.
  10. Franchak JM, Kretch KS, Soska KC, & Adolph KE (2011). Head-mounted eye tracking: A new method to describe infant looking. Child Development, 82(6), 1738–1750.
  11. Frick JE, & Richards JE (2001). Individual differences in infants' recognition of briefly presented visual stimuli. Infancy, 2(3), 331–352.
  12. Gogate LJ, Bahrick LE, & Watson JD (2000). A study of multimodal motherese: The role of temporal synchrony between verbal labels and gestures. Child Development, 71(4), 878–894.
  13. Henggeler SW, Watson SM, & Cooper PF (1984). Verbal and nonverbal maternal controls in hearing mother-deaf child interaction. Journal of Applied Developmental Psychology, 5(4), 319–329.
  14. Houston DM, Stewart J, Moberly A, Hollich G, & Miyamoto RT (2012). Word learning in deaf children with cochlear implants: Effects of early auditory experience. Developmental Science, 15(3), 448–461.
  15. Kannass KN, & Oakes LM (2008). The development of attention and its relations to language in infancy and toddlerhood. Journal of Cognition and Development, 9(2), 222–246.
  16. Lawson KR, & Ruff HA (2004). Early focused attention predicts outcome for children born prematurely. Journal of Developmental & Behavioral Pediatrics, 25(6), 399–406.
  17. Lederberg AR, Schick B, & Spencer PE (2013). Language and literacy development of deaf and hard-of-hearing children: Successes and challenges. Developmental Psychology, 49(1), 15–30.
  18. Liang KY, & Zeger SL (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73(1), 13–22.
  19. Lund E, & Schuele CM (2015). Synchrony of maternal auditory and visual cues about unknown words to children with and without cochlear implants. Ear and Hearing, 36(2), 229–238.
  20. MacRoy-Higgins M, & Montemarano EA (2016). Attention and word learning in toddlers who are late talkers. Journal of Child Language, 43(5), 1020–1037.
  21. McDuffie A, & Yoder P (2010). Types of parent verbal responsiveness that predict language in young children with autism spectrum disorder. Journal of Speech, Language, and Hearing Research, 53(4), 1026–1039.
  22. Miller J, & Chapman R (1985). Systematic analysis of language transcripts. Madison, WI: Language Analysis Laboratory.
  23. Nicholas JG, & Geers AE (2006). Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing, 27(3), 286–298.
  24. Niparko JK, Tobey EA, Thal DJ, Eisenberg LS, Wang NY, Quittner AL, Fink NE, & CDaCI Investigative Team (2010). Spoken language development in children following cochlear implantation. JAMA, 303(15), 1498–1506.
  25. Rader ND, & Zukow-Goldring P (2012). Caregivers’ gestures direct infant attention during early word learning: The importance of dynamic synchrony. Language Sciences, 34(5), 559–568.
  26. Richards JE (1997). Effects of attention on infants' preference for briefly exposed visual stimuli in the paired-comparison recognition-memory paradigm. Developmental Psychology, 33(1), 22–31.
  27. Rowe ML (2008). Child-directed speech: Relation to socioeconomic status, knowledge of child development and child vocabulary skill. Journal of Child Language, 35(1), 185–205.
  28. Rowe ML (2012). A longitudinal investigation of the role of quantity and quality of child-directed speech in vocabulary development. Child Development, 83(5), 1762–1774.
  29. Smith LB, Quittner AL, Osberger MJ, & Miyamoto R (1998). Audition and visual attention: The developmental trajectory in deaf and hearing populations. Developmental Psychology, 34(5), 840–850.
  30. Suanda SH, Smith LB, & Yu C (2016). The multisensory nature of verbal discourse in parent–toddler interactions. Developmental Neuropsychology, 41(5-8), 324–341.
  31. Tamis-LeMonda CS, Kuchirko Y, & Song L (2014). Why is infant language learning facilitated by parental responsiveness? Current Directions in Psychological Science, 23(2), 121–126.
  32. Tincoff R, & Jusczyk PW (1999). Some beginnings of word comprehension in 6-month-olds. Psychological Science, 10(2), 172–175.
  33. Tomasello M, & Farrar MJ (1986). Joint attention and early language. Child Development, 57(6), 1454–1463.
  34. VanDam M, Ambrose SE, & Moeller MP (2012). Quantity of parental language in the home environments of hard-of-hearing 2-year-olds. Journal of Deaf Studies and Deaf Education, 17(4), 402–420.
  35. Yoshida H, & Smith LB (2008). What's in view for toddlers? Using a head camera to study visual experience. Infancy, 13(3), 229–248.
  36. Yu C, & Smith LB (2012). Embodied attention and word learning by toddlers. Cognition, 125(2), 244–262.
  37. Yu C, & Smith LB (2017). Hand-eye coordination predicts joint attention. Child Development, 88(6), 2060–2078.
  38. Yu C, Suanda SH, & Smith LB (2018). Infant sustained attention but not joint attention to objects at 9 months predicts vocabulary at 12 and 15 months. Developmental Science, e12735.
