Journal of Speech, Language, and Hearing Research. 2019 May 21;62(6):2002–2008. doi: 10.1044/2019_JSLHR-H-18-0325

Remote Microphone System Use at Home: Impact on Child-Directed Speech

Carlos R. Benítez-Barrera, Emily C. Thompson, Gina P. Angley, Tiffany Woynaroski, Anne Marie Tharpe
PMCID: PMC6808370  PMID: 31112670

Abstract

Purpose

We investigated the impact of home use of a remote microphone system (RMS) on caregivers' production of, and children's access to, child-directed speech (CDS) in families with a young child with hearing loss.

Method

We drew upon extant data that were collected via Language ENvironment Analysis (LENA) recorders used with 9 families during 2 consecutive weekends (RMS weekend and no-RMS weekend). Audio recordings of primary caregivers and their children with hearing loss obtained while wearing and not wearing an RMS were manually coded to estimate the amount of CDS produced. The proportion of CDS that was likely accessible to children with hearing loss under both conditions was determined.

Results

Caregivers produced the same amount of CDS whether or not they were using the RMS. However, because caregivers often talked to their children from a distance, it was estimated that children with hearing loss could, on average, potentially access 12% more CDS if caregivers used an RMS.

Conclusion

Given our understanding of typical child language development, findings from this investigation suggest that children with hearing loss could receive auditory, speech, and language benefits from the use of an RMS in the home environment.


The adult talk to which children are exposed during early childhood is known to impact their language development and their academic success and cognitive skills later in life (e.g., Ambrose, Walker, Unflat-Berry, Oleson, & Moeller, 2015; Dilley et al., 2018; Hart & Risley, 1995; Romeo et al., 2018). Hearing technologies are intended to increase the likelihood that children with hearing loss have access to a sufficient quantity of clear linguistic input, even under adverse listening conditions (e.g., when background noise is present). Hearing aid and cochlear implant features (e.g., directional microphones, digital noise reduction) are designed to enhance signal-to-noise ratio in acoustically challenging environments that are frequently encountered by children with hearing loss. These device features are effective in providing improved access to the speaker of interest in noisy settings (Ching et al., 2009; Ricketts, Galster, & Tharpe, 2007; Wouters & Vanden Berghe, 2001) and have the potential to facilitate language development in many children with hearing loss (e.g., Fulcher, Purcell, Baker, & Munro, 2012).

Despite these advanced features in current hearing technology, many children with hearing loss still experience greater difficulties than children with normal hearing when listening to speech in noise (e.g., Lewis, Valente, & Spalding, 2015; McCreery et al., 2015) and might also fall short of developing normal or near-normal language and communication skills (e.g., Lund, 2015; Tomblin et al., 2015). One type of technology known to improve access to speech in the presence of background noise is a remote microphone system (RMS; i.e., frequency-modulated or digitally modulated systems). RMSs are an effective technology for children with hearing loss when used in school environments (e.g., Anderson & Goldstein, 2004) and, as a result, are widely recommended by audiologists for use in classroom settings. However, despite the recognized benefits of RMS use in noisy environments and their successful implementation in school settings, RMSs have not been broadly recommended for use in homes.

Past studies have evaluated the feasibility of using RMSs in the home and tested the impact of such systems in home environments on the development of spoken language (Moeller, Donaghy, Beauchaine, Lewis, & Stelmachowicz, 1996) and oral comprehension skills of children with hearing loss (Flynn, Flynn, & Gregory, 2005). The aforementioned studies found that parental reports were, on the whole, quite positive regarding the use of an RMS in home and community settings. Some favorable child outcomes from RMS use were also observed. For example, children's oral language comprehension skills were significantly improved when the home-based RMS was used (Flynn et al., 2005). However, at present, the specific mechanisms by which RMS use in the home might boost child outcomes, such as language comprehension, remain unclear.

Recently, Benítez-Barrera, Angley, and Tharpe (2018) reported promising results related to RMS use in the homes of children with hearing loss. Using Language ENvironment Analysis (LENA) recorders, they examined the impact of RMS home use on the communication of families of preschool children with hearing loss. Results indicated that children with hearing loss in their study could receive access to an average of 42% more caregiver talk when using RMSs than when not. In addition, caregivers talked significantly more from a distance (greater than 6–10 ft from their child) when wearing an RMS than when not. Thus, use of the RMS in the home appeared to provide children with hearing loss access to a larger quantity of caregiver talk, in particular, caregiver talk that might be produced from a distance (i.e., as a caregiver moves about naturally in the home setting).

It is well known that the amount of language to which children are exposed early in life has an impact on language development (e.g., Hart & Risley, 1995). However, not only the quantity but also the quality of language input to which children are exposed plays an important role in language development (Ambrose et al., 2015; Dilley et al., 2018; Hirsh-Pasek et al., 2015; Quittner et al., 2013; Romeo et al., 2018; Szagun & Rüter, 2009; Weisleder & Fernald, 2013; Woynaroski, 2014). Factors that contribute to the quality of language and relate positively to language development include whether caregiver talk is child directed (e.g., Dilley et al., 2018; Weisleder & Fernald, 2013), whether the conversation topics are relevant to the child's focus of attention (e.g., Tomasello & Farrar, 1986), and whether caregivers are responsive to their children during communication exchanges (e.g., Nittrouer, 2010).

Toward that end, the purpose of this study was to extend the findings from our previous work (Benítez-Barrera et al., 2018) by evaluating the quality of spoken language that caregivers produced, and that children with hearing loss could potentially access, with and without an RMS. Specifically, using the recordings from our previous study, we identified the proportion of caregiver talk that was child-directed speech (CDS) when using and not using an RMS in the home environment. The specific aims of the current study were to determine (a) whether an RMS could provide a child with more access to CDS in the home than when not using an RMS, (b) whether caregivers produced a greater proportion of CDS when using an RMS than when not using an RMS, and (c) whether caregivers produced a greater proportion of CDS in their overall talk from a distance when using an RMS than when not using an RMS.

Method

This study draws upon extant data from our prior study on RMS use in young children with hearing loss; detailed methodological information regarding that data collection is available in the prior report by Benítez-Barrera et al. (2018). A brief overview of methods relevant to the present article is provided below.

Data Source

A total of 10 families with children who had hearing loss were included in the original study conducted by Benítez-Barrera et al. (2018). However, LENA recordings from Family 2 were eliminated due to excessive use of electronic devices (see Footnote 1). Therefore, only data from nine families were analyzed in this study. Although the entire family was present throughout the study, only one caregiver from each family (herein referred to as the key caregiver) was the focus of investigation. Children with hearing loss (key children) ranged in age from 2;6 to 5;3 (years;months; M age = 3;8). All children had permanent bilateral hearing loss ranging from moderate to profound in degree (average better-ear [three-frequency] unaided pure-tone average = 77 dB HL). All children had used some type of hearing technology (hearing aids, cochlear implants, and/or bone-anchored devices [soft band]) on a full-time basis for at least 1 year. No child had previously used an RMS either at home or at school. None of the children had any diagnosed developmental disabilities other than language delays. Six children used English as the primary language at home; the other three children came from primarily Spanish-speaking households.

Families participated in the study for two consecutive weekends (comprising 2 days each, Saturday and Sunday). During one of the weekends, families used a Phonak Roger RMS, provided by the research team for use during the study. Roger is a digitally modulated wireless technology that includes a transmitter (worn by the speaker, in this case, the key caregiver) and a receiver (worn by the listener, in this case, the key child). The system allows for reliable broadband audio broadcast of key caregiver talk directly to the child's hearing device. During the other weekend, the RMS was not used. To control for a possible novelty effect, families used the RMS at home for the three nights immediately prior to the RMS weekend (Wednesday, Thursday, and Friday). Weekends were counterbalanced for RMS use across families.

The families' home language environment was recorded during both RMS and no-RMS weekends using digital LENA (Xu, Yapanel, & Gray, 2009) recorders. The average recording length across families was 14 hr per weekend (approximately 7 waking hours on each recording day; range: 6–10 hr per day). LENA allows for automated measurement and analysis of large quantities of data (i.e., daylong audio recordings) collected in natural settings (Oller et al., 2010). Typically, the child of interest wears a LENA recorder in the chest-level pocket of a t-shirt designed by LENA. The LENA recorder captures language and other environmental sounds produced within approximately a 6- to 10-ft radius throughout the day (Oller et al., 2010). Based on acoustic parameters of sound segments in the audio recording, LENA automatically labels and quantifies adult talk produced near the child's recorder, yielding estimated male adult near (MAN) and female adult near (FAN) word counts, as well as the amount of time in minutes that each category is represented in the audio recording or in preselected segments of the audio recording (along with other measures relevant to the ambient language environment).

In this study, on each day (both days of RMS and no-RMS weekends), two different LENA recorders were used simultaneously. One of these recorders was worn by the key child (key child's recorder), whereas the other was worn by the child's key caregiver (key caregiver's recorder) on a chest strap. Recording with two recorders simultaneously allowed us to quantify CDS produced by the key caregiver near the child (i.e., CDS that is likely accessible to a child with hearing loss even when they are not using an RMS), CDS produced by the key caregiver far from the child (i.e., CDS that is likely only accessible to a child with hearing loss via an RMS), and all CDS produced by the key caregiver within the home setting (i.e., near and far CDS). Derivation of variables indexing near, far, and all CDS categories is described more fully below.

Additionally, during each study day (both days of RMS and no-RMS weekends), families completed a daily log in which they reported information regarding activities occurring in the home, as well as people present in the home and the use of electronic devices. In this study, we analyzed selected segments of the aforementioned LENA recordings that were previously collected by Benítez-Barrera et al. (2018). Further information about segment selection for this extant data analysis is provided below.

Data Reduction and Variable Derivation

Uploading and Processing of LENA Files

LENA software was used to upload and synchronize audio recordings from LENA recorders worn by key children and key caregivers (see Footnote 2; average length = 14 hr for both key child and key caregiver LENAs for both the RMS and no-RMS weekends), to label all acoustic events within audio recordings (i.e., as MAN, FAN, and seven other categories: key child, other children, noise, silence, fuzz, television and electronics, and unclear), and to parse audio recordings into 5-min segments. Although LENA software is capable of reliably identifying adult talk in the audio recordings, it is not capable of distinguishing CDS from non-CDS. Therefore, conventional (human) coding was used to address the aims of the study, as detailed below.

Determining Whether Key Caregivers Were Near Versus Far From the Key Child

LENA labels were subsequently used to determine whether key caregivers were likely near versus far from the child during each 5-min segment of the audio recordings. To do this, audio recordings from the key child's LENA recorder and the key caregiver's LENA recorder from the same study day and condition were paired. We then calculated, within each 5-min segment, the discrepancy between the number of seconds each LENA category (FAN, MAN, key child, other children, television and electronics, noise, silence, and fuzz) was represented in the key child's versus the key caregiver's recording. Segments were then categorized as near versus far based on the similarity of the acoustic environment between the two recordings. Specifically, a segment was categorized as likely capturing a time when the key child and key caregiver were "near" one another when this discrepancy ranged from 0% to 15% (i.e., high correspondence between child and caregiver audio recordings), and as likely reflecting a time when the key child and key caregiver were "far" from one another when the discrepancy ranged from 25% to 100% (i.e., low correspondence between child and caregiver audio recordings).
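To make the classification rule concrete, the Python sketch below illustrates one way it could be implemented. It assumes that per-segment durations (in seconds) for each LENA category have already been exported for the paired key child and key caregiver recorders, and it assumes the discrepancy is the summed absolute difference across categories relative to the total labeled time; the exact discrepancy formula and the handling of segments falling between 15% and 25% are our assumptions, not details specified in the text.

```python
# Hypothetical sketch of the near/far segment classification described above.
# Assumes per-segment LENA category durations (in seconds) have already been
# exported for the paired key child and key caregiver recorders.

CATEGORIES = ["FAN", "MAN", "key_child", "other_children",
              "tv_electronics", "noise", "silence", "fuzz"]


def discrepancy(child_seconds: dict, caregiver_seconds: dict) -> float:
    """Proportion of labeled time on which the two recorders disagree.

    Assumption: summed absolute difference in seconds across all LENA
    categories, divided by the total labeled seconds across both recordings,
    so 0% means identical profiles and 100% means no overlap at all.
    """
    diff = sum(abs(child_seconds.get(c, 0.0) - caregiver_seconds.get(c, 0.0))
               for c in CATEGORIES)
    total = sum(child_seconds.get(c, 0.0) + caregiver_seconds.get(c, 0.0)
                for c in CATEGORIES)
    return diff / total if total else 0.0


def classify_segment(child_seconds: dict, caregiver_seconds: dict) -> str:
    """Label a 5-min segment as 'near', 'far', or 'excluded' (15%-25% buffer)."""
    d = discrepancy(child_seconds, caregiver_seconds)
    if d <= 0.15:
        return "near"      # high correspondence between the two recordings
    if d >= 0.25:
        return "far"       # low correspondence between the two recordings
    return "excluded"      # assumption: buffer-zone segments are not used
```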

Selecting Near and Far Segments for Coding of CDS

From each family's (key child's and key caregiver's) audio recordings, we randomly selected seven 5-min segments per day per distance category (near distance and far distance) for each study condition (RMS and no RMS) to be coded for CDS analyses. To qualify for inclusion in analyses, 5-min segments (a) had to occur ≥ 15 min after the onset of the recording (to increase the likelihood that families had adapted to the new technology and returned to their typical daily routine), (b) had to contain a minimum of 25 key caregiver words as measured by the LENA adult word count per segment (to increase the likelihood that the 5-min segment contained sufficient key caregiver speech to be codeable as CDS or no CDS), (c) could not immediately follow a previously chosen qualifying segment (to ensure that coded segments were distributed across the day; when consecutive segments were selected, the latter-selected segment was removed, and another random selection was made), and (d) could not be a segment in which the caregiver was talking on the phone for > 3 min (to maximize the likelihood of capturing typical caregiver talk that is potentially accessible to the child; segments for which this was the case were replaced as detailed above). The number of segments to be coded was based on preliminary analyses that suggested seven 5-min segments per day per distance category (near distance and far distance) for each study condition (RMS and no RMS) per family were needed to obtain acceptably stable estimates of CDS (g values ≥ .8; Yoder & Symons, 2010).
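As a rough illustration of these selection rules, the following Python sketch filters candidate 5-min segments and randomly draws seven per day, distance category, and condition. The field names (onset_min, adult_word_count, phone_call_min) are hypothetical, and criterion (c) is simplified to rejecting any segment adjacent to one already selected rather than re-drawing replacements.

```python
import random

# Hypothetical segment records; field names are illustrative only.
# Each segment dict: {"index": int, "onset_min": float,
#                     "adult_word_count": int, "phone_call_min": float}

def qualifies(segment, selected_indices):
    """Apply inclusion criteria (a), (b), and (d); criterion (c) is simplified
    to rejecting segments adjacent to an already selected segment."""
    return (segment["onset_min"] >= 15                 # (a) >= 15 min after recording onset
            and segment["adult_word_count"] >= 25      # (b) >= 25 key caregiver words
            and segment["phone_call_min"] <= 3         # (d) <= 3 min of phone talk
            and (segment["index"] - 1) not in selected_indices
            and (segment["index"] + 1) not in selected_indices)  # (c) non-consecutive


def select_segments(candidates, n_per_cell=7, seed=0):
    """Randomly draw up to n_per_cell qualifying segments for one
    day x distance category x condition cell."""
    rng = random.Random(seed)
    pool = candidates[:]
    rng.shuffle(pool)
    chosen, chosen_indices = [], set()
    for seg in pool:
        if qualifies(seg, chosen_indices):
            chosen.append(seg)
            chosen_indices.add(seg["index"])
            if len(chosen) == n_per_cell:
                break
    return chosen
```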

Coding of CDS Within Near and Far Segments

Trained coders subsequently used a 5-s partial interval coding system to code for the presence/absence of CDS within selected segments (28 total 5-min segments per family: seven each for near and far distance categories in the RMS and no-RMS conditions), using the time-stamped, exported .wav file from the key caregiver's LENA (which contained the clearest audio of the adult talk across near and far, RMS and no-RMS conditions). In a first pass, each 5-s interval was reviewed and coded for the presence of key caregiver speech (Caregiver Speech-Yes) at any time during the interval. For each interval in which key caregiver speech was present, coders then coded (a) the presence of any key caregiver words directed only to the key child (CDS code), (b) the presence of any key caregiver words directed to a group of people that included the key child (group-CDS code), (c) no key caregiver words clearly directed to the key child or a group including the key child (no-CDS code), or (d) unclear whether any key caregiver words were directed to the key child or a group including the key child based on the context of the audio recording (unclear code). For all aims, the dependent variable was the number of 5-s intervals containing caregiver speech directed to the child or to a group of people that included the target child (intervals with CDS or group-CDS codes), divided by the total number of 5-s intervals containing key caregiver talk (intervals with Caregiver Speech-Yes codes), multiplied by 100.
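Given interval-level codes, the dependent variable reduces to a simple proportion; a minimal sketch follows (function and label names are ours).

```python
def percent_cds(interval_codes):
    """Percentage of caregiver-speech intervals that contain CDS.

    interval_codes: list of codes for 5-s intervals flagged Caregiver
    Speech-Yes, each one of "CDS", "group_CDS", "no_CDS", or "unclear".
    """
    caregiver_intervals = len(interval_codes)
    if caregiver_intervals == 0:
        return 0.0
    cds_intervals = sum(code in ("CDS", "group_CDS") for code in interval_codes)
    return 100.0 * cds_intervals / caregiver_intervals


# Example: 60 coded intervals with 30 CDS and 4 group-CDS codes
# -> (34 / 60) * 100 = 56.7% CDS.
```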

A number of steps were taken to facilitate the coding process. Coders were provided with a contextual information document that described the ongoing activity and people present in the home during a given 5-min segment. This contextual information was compiled by the primary investigator using information from the daily log that families completed during the course of the study and information gathered by a preliminary review of each 5-min segment. Prior to coding each 5-min segment, coders were additionally familiarized with the voices of each of the participants of interest (key caregiver and key child) to promote easier recognition of the target participants during the CDS coding process. Vocal samples from each speaker from each family were taken from a nonqualifying 5-min segment in which the key caregiver and the key child were having a conversation. Finally, just prior to coding each 5-min segment, coders reviewed the immediately preceding 3 min of the audio recording to obtain additional context relevant to the language environment. Coders were blind to the study condition (RMS or no RMS) and distance category (near or far distance) of each 5-min segment.

Coder Training

Coders were four university students (two monolingual, native English speakers and two bilingual English and Spanish native speakers who coded audio recordings from English- and Spanish-speaking families, respectively). Prior to coding segments to be included in analyses, coders reviewed a detailed coding manual that contained relevant operational definitions and detailed instructions about the coding process and the computer software (Procoder DV) to be used in coding (Tapp & Walden, 1993). All coders then met with the principal investigator to review the guidelines provided in the manual and to address any questions. Coders were then trained using master-coded 5-min practice segments (that were not to be included in analyses). Segments were coded until the desired level of reliability (≥ .8 small-over-large agreement for the percentage of intervals containing CDS, group CDS, and no CDS) was attained for two consecutive files.
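Small-over-large agreement is simply the smaller of the two coders' values divided by the larger; for clarity, a minimal sketch is shown below (variable names are ours).

```python
def small_over_large(primary_pct: float, reliability_pct: float) -> float:
    """Small-over-large agreement for one variable on one practice file.

    Returns the smaller percentage divided by the larger; values >= .80 on
    two consecutive practice files marked the end of coder training.
    """
    if primary_pct == reliability_pct == 0:
        return 1.0  # trivially perfect agreement when both coders report 0
    return min(primary_pct, reliability_pct) / max(primary_pct, reliability_pct)
```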

Tracking Interrater Reliability

Primary coders for each coding team (English, Spanish) coded all segments per family; reliability coders coded a randomly selected 30% of the total number of segments per family. Primary coders were unaware of which segments were selected for reliability coding. Interobserver agreement (IOA) was tracked using MOOSES, computer software specifically designed for this purpose (see Tapp, 2015). We intended to resolve discrepancies via discussions with the principal investigator and the creation of consensus files in the event that IOA for any file was < 80% for any variable to be included in analyses; however, IOA never dropped below this a priori threshold for acceptable agreement during the coding process. Interrater reliability across segments randomly selected for reliability coding was quantified with intraclass correlation coefficients. Intraclass correlation coefficients obtained for RMS near, RMS far, RMS all, no-RMS near, no-RMS far, and no-RMS all were between .989 and .997. These values indicate that interrater reliability achieved with this coding system was excellent.
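For readers who wish to compute comparable reliability statistics, the sketch below estimates intraclass correlation coefficients from a long-format table of coder-by-segment percentages using the pingouin package. This is one of several possible tools, not the software used in the original analyses, and the data shown are illustrative only.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table (NOT the study data): one row per coded
# segment per coder, with the percentage of CDS intervals as the rating.
df = pd.DataFrame({
    "segment": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "coder": ["primary", "reliability"] * 6,
    "cds_pct": [54.0, 55.0, 61.0, 60.0, 48.0, 47.5,
                66.0, 65.0, 39.0, 41.0, 57.0, 56.5],
})

# pingouin reports several ICC variants (ICC1-ICC3, single and average measures).
icc = pg.intraclass_corr(data=df, targets="segment",
                         raters="coder", ratings="cds_pct")
print(icc[["Type", "ICC", "CI95%"]])
```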

Results

Potential Effect of RMS Use on Child Access to CDS

The first aim of the study was to determine whether use of an RMS could provide a child with hearing loss with access to more caregiver CDS in the home than when not using an RMS. Only data from the no-RMS weekend were used to analyze this aim, as this condition represented a typical weekend with no effects of new technology present in the home. First, we calculated the percentage of all CDS relative to the total amount of key caregiver talk as captured in the caregiver's LENA. This percentage was calculated by dividing the total number of 5-s intervals containing CDS (inclusive of intervals with CDS and group-CDS codes) by the total number of 5-s intervals containing any key caregiver talk (CDS, group-CDS, no-CDS, and unclear labels). We then calculated the percentage of all CDS that was produced near versus far from the key child and tested whether the percentage of all CDS (i.e., the CDS that would potentially be accessible to a child via an RMS) was significantly greater than the percentage of CDS produced near the child (i.e., the CDS likely accessible without the use of an RMS).

On average, across families, 57% of the total caregiver talk was child directed. Of the total caregiver talk, 45% was CDS produced near the child, and 12% was CDS produced from a far distance (see Figure 1). A paired-samples t test revealed that the percentage of all CDS was significantly greater than the percentage of near CDS, t(8) = 5.45, p < .05 (d = 1.82; see Figure 1), suggesting that a significantly greater amount of CDS produced by the key caregiver could be accessible to children with hearing loss by using an RMS in the home.
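For reference, the paired-samples comparison and its effect size can be computed as in the sketch below; the per-family percentages shown are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative per-family percentages (NOT the study data): percentage of
# caregiver-talk intervals that were CDS, overall vs. produced near the child.
all_cds = np.array([62, 55, 60, 49, 58, 63, 52, 57, 56], dtype=float)
near_cds = np.array([48, 44, 49, 40, 46, 50, 41, 45, 43], dtype=float)

# Paired-samples t test with df = n - 1 = 8.
t_stat, p_value = stats.ttest_rel(all_cds, near_cds)

# Paired Cohen's d: mean difference divided by the SD of the differences.
diff = all_cds - near_cds
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t(8) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```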

Figure 1. Percentage of intervals with child-directed speech (CDS) as coded from the key caregiver's LENA for the "near distance" (reflecting caregiver talk likely accessible to the child without the remote microphone system [RMS]) and "all CDS" (reflecting caregiver talk likely accessible to the child via the RMS) categories. Individual data and average data are displayed. *p < .05.

Effect of RMS Use on Caregiver CDS Production

The second aim of the study was to determine whether the percentage of all CDS produced by key caregivers differed when using versus not using an RMS (i.e., to test whether the use of RMS technology in the home affected a caregiver's tendency to direct speech to their child with hearing loss). We calculated the percentage of all CDS produced by the key caregiver in both study conditions (RMS and no RMS), as captured in the caregiver's LENA. A paired-samples t test revealed no difference between the mean percentage of CDS produced by caregivers in the no-RMS and RMS weekends (57% and 55%, respectively), t(8) = 0.77, p > .05 (d = 0.25).

Effect of RMS Use on Caregiver CDS Production From a Distance

The third aim of the study was to determine whether caregivers produced a greater percentage of CDS in their overall talk from a far distance from the child when using an RMS than when not. As indicated above, we calculated the percentage of far CDS produced by the key caregiver in both study conditions (RMS and no RMS), as captured by the key caregiver's LENA. A paired-samples t test revealed no difference between the percentage of CDS produced by caregivers in the far distance category for the no-RMS and RMS weekends (12% and 18%, respectively), t(8) = 0.93, p > .05 (d = 0.31).

Discussion

Benítez-Barrera et al. (2018) demonstrated that children with hearing loss had access to more caregiver talk coming from a distance when using an RMS in the home than when not. However, it was not clear whether the additional caregiver talk made accessible via this technology was relevant or directed to the child. Others have reported that not only quantity but also quality of linguistic input is important to the development of language and communication skills in children with and without hearing loss (e.g., Ambrose et al., 2015; Dilley et al., 2018; Hart & Risley, 1995; Hirsh-Pasek et al., 2015; Quittner et al., 2013; Romeo et al., 2018; Szagun & Rüter, 2009; Weisleder & Fernald, 2013; Woynaroski, 2014). Therefore, this study sought to evaluate the quality of the talk produced by caregivers from the recordings obtained by Benítez-Barrera et al.

For purposes of this study, quality of linguistic input was defined as caregiver talk directed to the child or CDS. The recordings of nine families of children with hearing loss revealed that, on average, 57% of the talk that caregivers produced during a typical weekend at home was CDS, and 12% of that CDS was produced from greater than 6–10 ft away from the child. This 12% of CDS produced at a relatively far distance from the child reflects high-quality adult talk that would likely only be heard by the child if using an RMS. This large effect represents a substantial increase in access to meaningful, child-relevant caregiver talk for preschoolers with hearing loss with the use of this novel technology.

It is notable as well that the use of the RMS in the home did not significantly impact caregivers' generalized tendency to direct speech to their children. Caregivers produced a similar proportion of CDS regardless of whether they were or were not wearing the RMS transmitter. Thus, management of the RMS technology did not appear to change the manner in which caregivers interacted with or talked to their child, aside from allowing them to communicate and engage with their child from a broader range of distances throughout the home setting (as previously reported by Benítez-Barrera et al.). The system simply made it more likely that the child had clear access to the linguistic input that their caregivers provided during such interactions. These findings collectively increase support for the use of RMS in the home settings of children with hearing loss.

This study is not without limitations. The methods used in collecting the recordings required that certain constraints be placed on families during recording weekends. These restrictions limited the number of willing and qualifying participating families. As a result, the sample size in this study is small, and we were only powered to detect large effects (such as the between-conditions difference in the amount of CDS potentially accessible with and without the use of an RMS). The other effect sizes for differences of interest between RMS and no-RMS conditions were negligible to small in magnitude. As a result of the small and relatively homogeneous sample, we additionally cannot be certain that the children in this study were representative of the larger population of children with hearing loss.

Additionally, one of the enrollment restrictions required that families limit the presence of other adults in the home during study weekends to ensure that the LENA technology could differentiate the key caregiver from other adults. The extent to which caregivers use CDS might differ under less restricted conditions. Furthermore, the extent to which children with hearing loss could potentially have access to additional talk (CDS or otherwise) when others are present in the home cannot be ascertained from this investigation. For example, other caregivers, adults, or children present in the home could talk directly to the child with hearing loss via an additional RMS transmitter, an option that was not captured in this study. Future work with larger and more diverse participant samples, exploring additional mechanisms by which RMS use could impact children's access to talk and their broader development, could provide important insights into the effects of this technology in everyday settings. Finally, although families used the RMS for 3 days prior to the RMS weekend, there was still a risk of a novelty effect from using a new technology for the first time. This novelty effect might have altered the communication between caregiver and child.

Conclusions

Results of this study extend findings from Benítez-Barrera et al. (2018) by showing that children with hearing loss could have access not only to more caregiver talk but also, more specifically, to more CDS when using an RMS in their homes. Moreover, caregivers produced a similar proportion of CDS from a distance whether or not they were using the RMS, suggesting that the use of this novel system does not adversely affect the quality of caregiver talk in the home (it is notable that this CDS would likely not be heard, or not be heard clearly, by children with hearing loss when they are not using an RMS). Collectively, results from these two studies on home-based use of RMS technology suggest that there are communication benefits related to its use in the home environment that could translate to enhanced language learning in children with hearing loss.

Acknowledgments

Support for this study was provided by Phonak AG, Phonak LLC, National Institutes of Health Grant U54HD083211 (awarded to Neul), and by Vanderbilt Clinical and Translational Science Award KL2TR000446 (awarded to T. Woynaroski) from the National Center for Advancing Translational Sciences. We thank Melanie Schuele and Rene Gifford for their collaboration on this project. We also want to thank Kim Coulter from Language ENvironment Analysis Research Foundation for her support as a consultant and all the coders (Andrea Vargas, Maureen Virts, Paula Zamora, and Meghan Kappelman) for their assistance. Finally, we are grateful to all the families who participated in the study and to the staff at the Mama Lere Hearing School at Vanderbilt for their assistance in recruitment. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Center for Advancing Translational Sciences or the National Institutes of Health.


Footnotes

1

Family inclusion criteria included willingness to spend at least 6 waking hours a day with their child at home and to avoid use of sound-producing devices (smartphones and tablets) during the weekends. Tablet or smartphone restrictions were implemented because of the likelihood of LENA labeling speech coming from these devices as human talk. Family 2 was found to have an excessive amount of electronic device use that produced less than the required 6 hr of clean data per day needed for analyses.

2

Synchronization was accomplished by using the LENA Advanced Data Extractor tool to export time-stamped data, tagged for real clock time, and customized for export with conversion to local time, from interpreted time segment files from key child and key caregiver recorders.

References

1. Ambrose S. E., Walker E. A., Unflat-Berry L. M., Oleson J. J., & Moeller M. P. (2015). Quantity and quality of caregivers' linguistic input to 18-month and 3-year-old children who are hard of hearing. Ear and Hearing, 36(Suppl. 1), 48S–59S.
2. Anderson K. L., & Goldstein H. (2004). Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom. Language, Speech, and Hearing Services in Schools, 35(2), 169–184.
3. Benítez-Barrera C. R., Angley G. P., & Tharpe A. M. (2018). Remote microphone system use at home: Impact on caregiver talk. Journal of Speech, Language, and Hearing Research, 61(2), 399–409.
4. Ching T. Y., O'Brien A., Dillon H., Chalupper J., Hartley L., Hartley D., … Hain J. (2009). Directional effects on infants and young children in real life: Implications for amplification. Journal of Speech, Language, and Hearing Research, 52(5), 1241–1254.
5. Dilley L., Wieland E., Lehet M., Arjmandi M. K., Houston D., & Bergeson T. (2018). Quality and quantity of infant-directed speech by maternal caregivers predicts later speech-language outcomes in children with cochlear implants. The Journal of the Acoustical Society of America, 143, 1822. Retrieved from https://asa.scitation.org/doi/abs/10.1121/1.5035984
6. Flynn T. S., Flynn M. C., & Gregory M. (2005). The FM advantage in the real classroom. Journal of Educational Audiology, 12, 37–44.
7. Fulcher A., Purcell A. A., Baker E., & Munro N. (2012). Listen up: Children with early identified hearing loss achieve age-appropriate speech/language outcomes by 3 years-of-age. International Journal of Pediatric Otorhinolaryngology, 76(12), 1785–1794.
8. Hart B., & Risley T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Brookes.
9. Hirsh-Pasek K., Adamson L. B., Bakeman R., Owen M. T., Golinkoff R. M., Pace A., … Suma K. (2015). The contribution of early communication quality to low-income children's language success. Psychological Science, 26(7), 1071–1083.
10. Lewis D. E., Valente D. L., & Spalding J. L. (2015). Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom. Ear and Hearing, 36(1), 136–144.
11. Lund E. (2015). Vocabulary knowledge of children with cochlear implants: A meta-analysis. Journal of Deaf Studies and Deaf Education, 21(2), 107–121.
12. McCreery R. W., Walker E. A., Spratford M., Oleson J., Bentler R., Holte L., & Roush P. (2015). Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing. Ear and Hearing, 36(Suppl. 1), 60S–75S.
13. Moeller M. P., Donaghy K. F., Beauchaine K. L., Lewis D. E., & Stelmachowicz P. G. (1996). Longitudinal study of FM system use in nonacademic settings: Effects on language development. Ear and Hearing, 17(1), 28–41.
14. Nittrouer S. (2010). Early development of children with hearing loss. San Diego, CA: Plural.
15. Oller D. K., Niyogi P., Gray S., Richards J. A., Gilkerson J., Xu D., … Warren S. F. (2010). Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development. Proceedings of the National Academy of Sciences of the United States of America, 107(30), 13354–13359.
16. Quittner A. L., Cruz I., Barker D. H., Tobey E., Eisenberg L. S., Niparko J. K., & Childhood Development after Cochlear Implantation Investigative Team. (2013). Effects of maternal sensitivity and cognitive and linguistic stimulation on cochlear implant users' language development over four years. The Journal of Pediatrics, 162(2), 343.e3–348.e3.
17. Ricketts T., Galster J., & Tharpe A. M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology, 16(2), 130–144.
18. Romeo R. R., Leonard J. A., Robinson S. T., West M. R., Mackey A. P., Rowe M. L., & Gabrieli J. D. E. (2018). Beyond the 30-million-word gap: Children's conversational exposure is associated with language-related brain function. Psychological Science, 29(5), 700–710.
19. Szagun G., & Rüter M. (2009). The influence of parents' speech on the development of spoken language in German-speaking children with cochlear implants. Revista de Logopedia, Foniatría y Audiología, 29(3), 165–173.
20. Tapp J. (2015). MOOSES for Windows (Version 4.8.8.0). Retrieved from http://mooses.vueinnovations.com
21. Tapp J., & Walden T. (1993). PROCODER: A professional tape control, coding, and analysis system for behavioral research using videotape. Behavior Research Methods, Instruments, & Computers, 25(1), 53–56.
22. Tomasello M., & Farrar M. J. (1986). Joint attention and early language. Child Development, 57(6), 1454–1463.
23. Tomblin J. B., Harrison M., Ambrose S. E., Walker E. A., Oleson J. J., & Moeller M. P. (2015). Language outcomes in young children with mild to severe hearing loss. Ear and Hearing, 36(Suppl. 1), 76S–91S.
24. Weisleder A., & Fernald A. (2013). Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11), 2143–2152.
25. Wouters J., & Vanden Berghe J. (2001). Speech recognition in noise for cochlear implantees with a two-microphone monaural adaptive noise reduction system. Ear and Hearing, 22(5), 420–430.
26. Woynaroski T. (2014). The stability and validity of automated vocal analysis in preschoolers with autism spectrum disorder in the early stages of language development (Doctoral dissertation). Vanderbilt University, Nashville, TN.
27. Xu D., Yapanel U., & Gray S. (2009). Reliability of the LENA™ language environment analysis system in young children's natural home environment. Retrieved from http://www.lenafoundation.org/TechReport.aspx/Reliability/LTR-05-2
28. Yoder P., & Symons F. (2010). Observational measurement of behavior. New York, NY: Springer.
