Developmental Cognitive Neuroscience. 2019 Aug 8;39:100701. doi: 10.1016/j.dcn.2019.100701

Maternal speech shapes the cerebral frontotemporal network in neonates: A hemodynamic functional connectivity study

Mariko Uchida-Ota a,b, Takeshi Arimitsu c, Daisuke Tsuzuki d, Ippeita Dan e, Kazushige Ikeda c, Takao Takahashi c, Yasuyo Minagawa a,f
PMCID: PMC6969365  PMID: 31513977

Highlights

  • Functional connectivity in response to speech in neonates was examined with fNIRS.

  • We compared frontotemporal networks for processing maternal and stranger speech.

  • Frontotemporal connectivity and brain activity were stronger for maternal speech.

  • Maternal speech enhanced left frontotemporal connections within language networks.

  • The frontotemporal network may be initially fostered by maternal speech in the neonatal period.

Keywords: Neonate, Mother’s voice recognition, Familiarity, Functional near-infrared spectroscopy, Frontotemporal network, Phase-locking value

Abstract

Language development and the capacity for communication in infants are predominantly supported by their mothers, beginning when infants are still in utero. Although a mother’s speech should thus have a significant impact on her neonate’s brain, neurocognitive evidence for this hypothesis remains elusive. The present study examined 37 neonates using near-infrared spectroscopy and observed the interactions between multiple cortical regions while neonates heard speech spoken by their mothers or by strangers. We analyzed the functional connectivity between regions whose response-activation patterns differed between the two types of speakers. We found that when hearing their mothers’ speech, functional connectivity was enhanced in both the neonatal left and right frontotemporal networks. On the left it was enhanced between the inferior/middle frontal gyrus and the temporal cortex, while on the right it was enhanced between the frontal pole and temporal cortex. In particular, the frontal pole was more strongly connected to the left supramarginal area when hearing speech from mothers. These enhanced frontotemporal networks connect areas that are associated with language (left) and voice processing (right) at later stages of development. We suggest that these roles are initially fostered by maternal speech.

1. Introduction

Language acquisition in human infants shows incredible development in the first year of life. Evidence from developmental psychology indicates that early language and communicative development is chiefly supported by primary caretakers, who in many cases are the mothers. Examples include infants’ phoneme perception being tuned to maternal articulation (Cristià, 2011), advanced prosodic perception in bilingual newborns whose mothers spoke two languages during pregnancy (Abboub et al., 2016), advantages in facial-emotional recognition (Montague and Walker-Andrews, 2002), and word learning (Barker and Newman, 2004). One-month-old infants (Mehler et al., 1978) and neonates (DeCasper and Fifer, 1980) can discriminate their mothers’ voices from an unfamiliar female voice. Additionally, the fetus is constantly exposed to its mother’s speech; vocal sounds and vibrations are conducted through the intrauterine environment to stimulate developing auditory neural pathways (May et al., 2011), enabling the fetus to respond specifically to its mother’s speech. In fact, in preterm newborns born at a gestational age of 25–32 weeks, the bilateral auditory cortex in the temporal lobes becomes thicker during the first month after birth with exposure to the mother’s speech sounds and heartbeat than with exposure to environmental noise (Webb et al., 2015). This suggests that the auditory cortex is more adaptive to maternal sounds than to environmental sounds. Recognition of the mother’s speech is therefore facilitated, and it becomes established as the most familiar source of vocal stimulation for the neonate.

Several brain regions apart from the auditory cortex have been reported as neuronal substrates underlying the mother’s special role in infant auditory recognition and language development. Compared with an unfamiliar woman’s voice, maternal speech elicits specific event-related potentials (ERPs) in the parietal and frontal areas, as well as the bilateral temporal areas, in neonates and infants (deRegnier et al., 2000; Siddappa et al., 2004; Therien et al., 2004; Purhonen et al., 2005; Beauchemin et al., 2010). A functional magnetic resonance imaging (fMRI) study in 2-month-old infants also reported significant responses to maternal speech in the medial prefrontal cortex (mPFC), orbitofrontal cortex (OFC), amygdala, and left temporal region (Dehaene-Lambertz et al., 2010). Significantly greater activation in mPFC was also shown in 7–9-month-old infants when they heard their mothers produce infant-directed speech (Naoi et al., 2012) and in 6-month-old infants who heard their own names spoken by their mothers (Imafuku et al., 2014). The N400 component, which reflects semantic priming, is observed in the parietal area of 9-month-olds exclusively when word stimuli are spoken by a maternal voice (Parise and Csibra, 2012). Abrams et al. (2016) reported that when hearing their mothers’ speech, the strength of functional connectivity between the temporal region—as a voice-processing circuit (Beauchemin et al., 2010; Grossmann et al., 2010)—and the OFC and nucleus accumbens—as a reward circuit (Haber and Knutson, 2010)— was correlated with scores of social communication skill in 10-year-old children. Thus, maternal speech plays indispensable roles in facilitating language acquisition and social communication skills in infants, and this facilitation is based on activity in several brain regions, including the temporal, frontal, and parietal cortices that are assumed to be interacting with each other.

Despite these studies, little is actually known about whether or how these regions interact when neonates hear their mothers’ speech. From infancy, the left and right temporal cortices play different roles: the left temporal region responds strongly to phonologically different sounds (e.g., Peña et al., 2003; Sato et al., 2012; Arimitsu et al., 2011), whereas the right temporal region responds strongly to prosodic aspects of speech (e.g., Homae et al., 2006; Grossmann et al., 2010; Arimitsu et al., 2011; for review, see Minagawa-Kawai et al., 2011a). Because neonates are exposed to their mothers’ speech beginning in utero, both temporal regions might process the acoustic features of a mother’s speech differently from those of speech produced by another person. Temporal-region activation and its connectivity with the frontal and parietal regions should differ depending on the familiarity of speech, because these brain regions integrate different types of perceptual information and contribute to higher-level speech processing. Perani et al. (2011), tracking fibers with diffusion tensor imaging (DTI), detected structural connectivity between the temporal and prefrontal regions (a known language-related neural substrate) in neonates. The same authors also reported that the functional connectivity between these regions while hearing normal speech is not fully mature in neonates. However, the vocal stimulus used in their study was not the maternal voice.

Consequently, the first aim of the present study was to use functional near-infrared spectroscopy (fNIRS) to examine, with high spatial accuracy, the cortical regions in neonates that respond to maternal speech. The second aim was to characterize any changes in functional connectivity induced by the maternal voice. We compared cortical responses to maternal speech with responses to speech from an unknown female. We hypothesized that maternal speech would activate a stronger cortical network between the temporal and frontal regions of the neonatal brain, reasoning that maternal speech is continually presented to the fetus in utero and that its familiar phonetic and prosodic features might more readily trigger higher-level speech processing in language areas of the brain.

2. Methods

2.1. Participants

The participants were 37 term neonates (20 females and 17 males) with normal hearing, which was assessed using auditory brainstem responses or other clinical tests. Their mean age was 4.5 ± 1.4 days (range: 2–7; 20 participants were 4 days old) and their mean gestational age was 39.3 ± 1.2 weeks (range: 37–41). Mean birth weight was 3097 ± 267 g (range: 2628–3676 g). All of the mothers were monolingual native Japanese speakers. Written informed consent was obtained from parents before participation. The study was approved by the ethics committee of Keio University Hospital (No. 20090189).

2.2. Stimuli and procedures

The experiments were performed in a testing room at Keio University Hospital. The experiment had two conditions based on differing levels of stimulus familiarity: the auditory stimuli spoken by the neonate’s mother were familiar to the neonate, whereas those spoken by another participant’s mother (a stranger) were unfamiliar. Stimuli were sampled at 44 kHz (16 bit) with a digital voice recorder and used as natural speech stimuli without any low-pass filtering. All speech stimuli comprised 18 short sentences from an original Japanese script that had the rich intonation characteristic of infant-directed speech (Cooper and Aslin, 1990). When these stimuli were recorded, the speakers were asked to speak clearly with a high overall pitch, wide pitch excursions, slow tempo, and exaggerated emphatic stress. Different stranger-voice stimuli were used across participants to ensure that any observed effects were related to their unfamiliarity and not to any specific acoustic characteristics; for example, the mother’s voice for baby A was used as the stranger’s voice for baby B, and the mother’s voice for baby B was used as the stranger’s voice for baby C. Acoustic parameters of each stimulus (utterance duration, intensity, and fundamental frequency) did not differ significantly across speech stimuli (see Table S1). Stimuli were presented to neonates via two speakers positioned 45 cm from their heads. Stimuli were presented in a block design such that each trial comprised a silent period (10 s) followed by a stimulation period (15 s); thus, each trial lasted 25 s. In each stimulation period, 4 or 5 sentences (4.5 on average) were presented with pauses between the sentences. The average duration of a single sentence was 2285 ms (SD: 753 ms).

During an experimental session, we randomly presented trials from the two speech conditions (mother and stranger) and terminated the experiment when at least five trials of each condition had succeeded without gross movement of the neonate’s head or body (see section 2.4 for details on judging body-movement artefacts). We recorded changes in regional cerebral hemoglobin (Hb) concentration using a NIRS system (ETG-4000, Hitachi Medical Corporation, Tokyo, Japan) while each neonate lay in a supine position and was exposed to the speech stimuli. Light beams of 695- and 830-nm wavelengths were emitted from each probe with a maximum intensity of 1.5 mW. The transmitted light was sampled at 10 Hz by the detecting probes. For the participants (n = 18) in the latter half of this study, we were able to record NIRS simultaneously with electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), and respiratory chest movements using a digital polygraph system (Polymate AP1132; TEAC, Tokyo, Japan), as a modified version of the ethics permission covering co-registration measurement had been obtained. EEGs were recorded from the Fz and Pz positions of the international 10/20 sensor placement system, and the EEG and EOG measurements were used to score sleep states. Sleep states were determined according to the criteria of Anders, Emde, and Parmelle (1971) and Scholle and Schäfer (1999). The ECG and respiratory chest-movement measurements were used for a different purpose in a separate study (Uchida et al., 2017).

Hb signals were measured at 46 positions on the frontal and temporal regions of the scalp (Fig. 1: See section 2.3 for details regarding the method for mapping these channels). The emitting and detecting probes were separated by 2 cm and arranged in a 3 × 3 or 3 × 5 square lattice. The measurement positions were defined as the midpoint between the emitting and detecting probes. The 3 × 3 holders for the left and right temporal regions were placed so that the midpoint between the positions for measurement channel (Chs) 11 and 12 (or between Chs 23 and 24) corresponded to the T3 (or T4) position in the 10/20 system. The lowest probe row was nearly aligned with the horizontal reference curve (F7-T3-T5 or F8-T4-T6). The 3 × 5 holder for the frontal region was set so that the midpoint between Chs 26 and 27 was placed at Fpz, and the lowest probe row was aligned with the horizontal reference curve (F7-Fp1-Fpz-Fp2-F8). The middle column was aligned along the sagittal reference curve.

Fig. 1. Channel locations for Hb signals obtained using NIRS on a size-modified infant brain.

2.3. Estimation of macroanatomical locations

To determine the underlying cortical structures that corresponded to the measurement Chs on the scalp, we used a modified version of the virtual registration method (Tsuzuki et al., 2007; Okamoto and Dan, 2005). This uses MRI template data from a single 12-month-old infant with macroanatomical segmentation and detailed landmarking of scalp structures (Matsui et al., 2014). Specifically, we linearly reduced the size of the infant template based on the head circumference (Fpz-T3-POz-T4-Fpz) of a 12-month-old infant (44.2 cm) and a neonate template that we generated as an average from all participants in this study (34.4 cm). Subsequently, we arranged virtual holders that were the same size as the real probe holders (2 cm inter-optode distances) and allocated them along the references of the 10/20 system on the head surface of this minified infant template, which reproduced the real holder allocation. The given Chs on the head surface were then projected onto the cortical surface of the infant template as shown in Fig. 1. Finally, macro-anatomy of the lateral cortical surface was estimated primarily using the infant template with subsidiary reference to automatic anatomical labeling (AAL, Tzourio-Mazoyer et al., 2002; Matsui et al., 2014). Regions of interest (ROIs) were determined based on the macroanatomical estimation.
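As a rough illustration of the linear size reduction described above, the sketch below scales template coordinates by the ratio of the two head circumferences (34.4 cm / 44.2 cm). This is a minimal sketch under our own assumptions, not the authors' implementation; the function name `shrink_template` and the coordinate representation are hypothetical.

```python
import numpy as np

# Head circumferences (Fpz-T3-POz-T4-Fpz) reported in the text.
INFANT_CIRCUMFERENCE_CM = 44.2   # 12-month-old template
NEONATE_CIRCUMFERENCE_CM = 34.4  # average of the neonates in this study

SCALE = NEONATE_CIRCUMFERENCE_CM / INFANT_CIRCUMFERENCE_CM  # ~0.78

def shrink_template(vertices_mm: np.ndarray, origin_mm: np.ndarray) -> np.ndarray:
    """Linearly shrink template head/cortex vertex coordinates about a reference origin."""
    return origin_mm + SCALE * (vertices_mm - origin_mm)
```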

These methods are validated by Tsuzuki et al. (2017) who quantified individual and developmental variations in cortical structure among infants ranging in age from birth to 2 years. Specifically, they examined individual variability in the distribution of each macroanatomical landmark position that was projected on the lateral cortical template of a 12-month-old infant (Matsui et al., 2014), which was identical to the template used in the present study. They found that individual variability was smaller than the pitch of the 10/10 system landmarks. They concluded that the 10/10 system (and the 10/20 system) can serve as a robust predictor of macroanatomy estimated from the scalp of infants ranging in age from birth to 2 years. Therefore, linearly reducing the size of the 12-month-old brain in applying the virtual registration method is appropriate for neonate macroanatomical estimation.

2.4. Signal preprocessing

Signal preprocessing, the subsequent averaging analysis, and the phase-locking analysis were performed in MATLAB (MathWorks, Inc., Natick, MA). Signal preprocessing was performed using the platform for optical topography analysis tools (POTATo, version 3.7.2 beta; Hitachi, Ltd, Tokyo, Japan) running on MATLAB. Data from the NIRS system were transformed into changes in oxygenated (oxy-) and deoxygenated (deoxy-) Hb molar concentration (unit: mM·mm). In this transformation, based on the modified Beer–Lambert law (e.g., Maki et al., 1995), the optical path length (L) and the absorption coefficients for oxy- and deoxy-Hb (εoxy, εdeoxy) were assumed to be constant. Specifically, the product of L and the differential path length factor was set to 1 because measured L was not available. εoxy and εdeoxy were set to 0.415 and 1.990 mM−1 cm−1 for the 695-nm wavelength and to 1.013 and 0.778 mM−1 cm−1 for the 830-nm wavelength, respectively. When the contact between the probes and the scalp was insufficient, the oxy-Hb signals of the affected measurement channels showed abnormally low or extremely high variability. Therefore, we examined the distribution of the variation (standard deviation: SD) of the oxy-Hb signal within the first 10–25 s of measurement across all channels, and measurement channels were excluded from the following analyses when their SD was above the 95th percentile (> 0.2 mM·mm) or below the 5th percentile (< 0.001 mM·mm) of this distribution. The time-continuous Hb signals were band-pass filtered between 0.04 and 0.20 Hz using a zero-phase digital filter. We set the lower limit of the band to 0.04 Hz because a narrower band was preferable for the subsequent phase-locking analysis, and the upper limit to 0.20 Hz to detect fast hemodynamic responses with a 2–3 s peak latency. We expected the hemodynamic response curve to return to baseline within a few seconds after stimulus offset; however, it frequently returned to baseline only a few seconds before the following stimulus onset. Therefore, we used the time point 0.1 s before stimulus onset as the baseline for evaluating the relative change in Hb signals in response to the stimulus. We segmented the Hb signals into 25-s blocks, each comprising a pre-stimulation silent time point 0.1 s before stimulus onset, a 15-s stimulus period, and a 9.9-s post-stimulation silent period. Visual inspection confirmed that unusually large oxy-Hb amplitudes (> 0.3 mM·mm) occurred when infants moved their heads slightly during the measurement. We observed an absolute maximum oxy-Hb peak above 0.3 mM·mm in 15.1% of all experimental blocks. These blocks were deemed error blocks contaminated with body-movement artefacts and were discarded. Moreover, for each condition, data from participants for whom we could not obtain at least four blocks in more than two thirds of the channels were discarded. In the resulting data shown in Table S2, the mean number of available blocks per Ch across participants did not differ between stimulus conditions (mother’s voice: 6.43 ± 1.75; stranger’s voice: 5.97 ± 1.37; t58 = 1.17, p = 0.25), nor did the mean number of available channels per block (mother’s voice: 42.06 ± 3.21; stranger’s voice: 42.27 ± 2.75; t60 = −0.28, p = 0.78). We analyzed data from at least 25 participants for each condition and each Ch.
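The following Python sketch illustrates the preprocessing steps described above: conversion of optical-density changes into oxy-/deoxy-Hb using the modified Beer–Lambert law with the stated extinction coefficients (path length × DPF set to 1), zero-phase band-pass filtering between 0.04 and 0.20 Hz, and the 0.3 mM·mm movement-artefact threshold. The actual analysis used the POTATo toolbox in MATLAB; this is only an illustrative re-expression, and the function names are our own.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # Hz, sampling rate of the NIRS system

# Extinction coefficients (mM^-1 cm^-1) for [oxy-Hb, deoxy-Hb] at each wavelength,
# as given in the text; the product of path length and DPF is set to 1.
E = np.array([[0.415, 1.990],    # 695 nm
              [1.013, 0.778]])   # 830 nm

def od_to_hb(delta_od_695, delta_od_830):
    """Modified Beer-Lambert law: optical-density changes -> oxy-/deoxy-Hb (mM*mm)."""
    delta_od = np.vstack([delta_od_695, delta_od_830])  # shape (2, n_samples)
    oxy, deoxy = np.linalg.solve(E, delta_od)            # invert the 2x2 system
    return oxy, deoxy

def bandpass(hb, low=0.04, high=0.20, fs=FS, order=3):
    """Zero-phase band-pass filter of the continuous Hb time course."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, hb)

def is_motion_block(oxy_block, threshold=0.3):
    """Flag a 25-s block as movement-contaminated (|oxy-Hb| peak > 0.3 mM*mm)."""
    return np.max(np.abs(oxy_block)) > threshold
```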

2.5. Averaging analysis

First, we averaged the block data of the oxy- and deoxy-Hb signals during stimulus exposure for each participant, channel, and stimulus condition. We considered both oxy- and deoxy-Hb as variables for the following statistical tests because hemodynamic physiology is complex and it was unclear which measure best represents the neural correlates of a particular cognitive function, particularly in young infants, as described in section 2.6. Then, we performed two-tailed Wilcoxon’s rank sum tests (α = 0.05) within each stimulus condition (mother’s speech and stranger’s speech) to identify channels in which the mean change in Hb 3.0–14.9 s after stimulus onset differed significantly from that during the 0.1-s pre-stimulation period across participants. Moreover, averaged data across participants underwent two-tailed Wilcoxon’s rank sum testing (α = 0.05) to identify channels in which changes in the Hb signal during the ‘mother’ condition differed significantly from those during the ‘stranger’ condition. In addition, to investigate the hemispheric lateralization of regions with stronger responsiveness to the mother’s voice than to the stranger’s voice, a two-way rank-based robust analysis of variance (ANOVA; Hettmansperger and McKean, 2011; Hocking, 1985) was conducted to determine the main effects of and interaction between the hemisphere factor (left versus right channels) and the voice stimulus factor (mother’s versus stranger’s voice). To account for multiple comparisons among all channels, we used false discovery rate (FDR) correction (q = 0.05) (Benjamini and Hochberg, 1995). The effect size was calculated as res = Z/√N, where Z and N represent the z-score and sample size, respectively (Field, 2005).
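A minimal sketch of these channel-wise statistics is given below: a two-tailed Wilcoxon rank-sum test per channel, the effect size res = Z/√N, and Benjamini–Hochberg FDR selection across channels. It is illustrative only (the rank-based robust ANOVA is not reproduced), and the function names are our own.

```python
import numpy as np
from scipy.stats import ranksums

def channel_test(stim_means, base_means):
    """Two-tailed Wilcoxon rank-sum test for one channel across participants,
    plus the effect size res = Z / sqrt(N) used in the paper."""
    z, p = ranksums(stim_means, base_means)      # the test statistic is a z-score
    n = len(stim_means) + len(base_means)
    return p, abs(z) / np.sqrt(n)

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR: boolean mask of channels surviving correction."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order]
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = ranked <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```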

2.6. Phase-locking analysis

We chose channels in which changes in the Hb signal differed significantly between voice conditions as seed ROIs and examined the functional connectivity between each seed and all other channels. We used a phase-based method to investigate functional connectivity (Lachaux et al., 1999; Tass et al., 1998), focusing on phase-locking (phase synchronization) between two Hb signals. The reason for using this phase-based method instead of more general amplitude-based methods, such as the general linear model (GLM; e.g., Perani et al., 2011) or dynamic causal modeling (Tak et al., 2015), relates to Hb data characteristics that are unique to neonates. Amplitude-based methods require a hemodynamic response function (HRF) model that is based on physiological neural mechanisms. However, it is difficult to define a good HRF model in infants because of the variability of the hemodynamic response, which reflects immature vasculature and, consequently, immature neurovascular coupling (Gervain et al., 2011; Arimitsu et al., 2018; Gemignani et al., 2018). In the present study, we could not define an appropriate HRF model for neonatal Hb signals because the response varied among cortical areas and stimulus conditions (see the Results section). Therefore, we selected a phase-based method requiring no prior knowledge of the shape of the expected hemodynamic response.

The phase-based method has several steps. First, the instantaneous phases, φ_X(t,i) and φ_Y(t,i), were extracted from the Hilbert transformation of the Hb signals for channel X and channel Y, respectively, for time t in the i-th block. X corresponds to each channel of the seed ROIs, and Y corresponds to each channel other than X. The phase difference between channels X and Y was calculated as θ(t,i) = φ_X(t,i) − φ_Y(t,i) (see Fig. 2A and B). We wrapped θ onto [0, 2π) (i.e., took θ modulo 2π) to detect its preferred values irrespective of noise-induced phase slips (Tass et al., 1998). Next, for each Ch-pair of X and Y, we used a statistical test based on surrogate data to judge whether θ remained stable (phase-locking) during 3.0–14.9 s after stimulus onset. A large amount of θ data for every Ch-pair was needed in this test to obtain an appropriate distribution of θ. However, the number of samples of θ from the stimulus periods of a single participant was not sufficient, because the sampling rate of the Hb signal was low (10 Hz) and the stimulus period was short (15 s). Therefore, we pooled θ data from all participants for each stimulus condition. Fig. 2C shows an example of the distribution of the actual samples of θ between X = Ch 40 and Y = Ch 5 across all participants in the mother’s speech condition. The surrogate θ˜ data for the same Ch-pair of X and Y were produced by applying the iterated amplitude-adjusted Fourier transform method (Schreiber and Schmitz, 1996) to the Hb signal of channel Y (Fig. 2D). This method enabled us to randomly change θ between X and Y (Fig. 2E). The surrogate distribution of θ˜ was also obtained by pooling data from all participants (Fig. 2F). We calculated ρ as an index based on Shannon entropy (Tass et al., 1998) to test the null hypothesis that the samples of θ for the Ch-pair were drawn from a uniform distribution (i.e., the distribution of non-phase-locking data). ρ is defined as ρ = 1 − S/ln M, where S is the Shannon entropy and M is the number of bins in the distribution of θ. The optimal number of bins was given by M = exp[0.626 + 0.4 ln(M_s − 1)], where M_s denotes the number of samples (Otnes and Enochson, 1972); here, M was set to 112. The Shannon entropy is S = −Σ_{m=1}^{M} p(m) ln p(m), where p(m) is the probability of the m-th bin. ρ = 0 corresponds to a uniform distribution (no phase-locking) and ρ = 1 corresponds to perfect phase-locking across participants. We selected Ch-pairs with ρ higher than the significance level, which corresponded to the 99th percentile of the surrogate distribution of ρ given by 200 surrogate data sets. These selected Ch-pairs were interpreted as phase-locking pairs.
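The sketch below illustrates the core computations of this phase-based analysis: extracting instantaneous phases with the Hilbert transform, the Shannon-entropy index ρ computed on pooled phase differences, and the bin-number rule M = exp[0.626 + 0.4 ln(M_s − 1)]. A plain Fourier phase-randomization surrogate is shown as a simplified stand-in for the iterated amplitude-adjusted method actually used; all function names are our own, and thresholds (e.g., the 99th percentile over 200 surrogates) follow the text.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(hb_signal):
    """Instantaneous phase of a band-passed Hb time course via the Hilbert transform."""
    return np.angle(hilbert(hb_signal))

def optimal_bins(n_samples):
    """Bin-number rule M = exp[0.626 + 0.4 ln(Ms - 1)] (Otnes & Enochson, 1972)."""
    return int(np.exp(0.626 + 0.4 * np.log(n_samples - 1)))

def entropy_index(theta, n_bins):
    """Shannon-entropy phase-locking index rho = 1 - S / ln(M) (Tass et al., 1998).
    theta: pooled phase differences (radians) across blocks and participants."""
    wrapped = np.mod(theta, 2 * np.pi)                       # wrap onto [0, 2*pi)
    counts, _ = np.histogram(wrapped, bins=n_bins, range=(0, 2 * np.pi))
    p = counts / counts.sum()
    p = p[p > 0]                                             # 0 * ln(0) contributes nothing
    S = -np.sum(p * np.log(p))
    return 1.0 - S / np.log(n_bins)

def phase_randomized_surrogate(signal, rng):
    """Simplified Fourier phase-randomization surrogate of one channel's signal
    (the paper uses the iterated amplitude-adjusted variant, IAAFT)."""
    spec = np.fft.rfft(signal)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0       # keep the DC bin real
    phases[-1] = 0.0      # keep the Nyquist bin real for even-length signals
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(signal))
```

A Ch-pair would then be judged phase-locked if its observed ρ exceeded the 99th percentile of ρ values obtained from repeated surrogate realizations.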

Fig. 2. Significant synchronization between two channels. (A) Examples of Hb-signal waves for channel X (Ch. 40; solid line) and channel Y (Ch. 5; dashed line) in the i-th block of a participant. (B) Phase difference θ(t,i) between X and Y. (C) The actual distribution of θ for all blocks in the ‘mother’ condition for all participants. (D) An example of surrogate data produced by the phase-randomized Hb-signal wave of Y. (E) Phase difference θ˜(t,i) between X and the surrogate Y. (F) The surrogate distribution of θ˜ for all participants is similar to a uniform distribution.

Next, we sought Ch-pairs in which the phase-locking level varied with the voice stimulus. First, we calculated the phase-locking value (PLV; Lachaux et al., 1999) of the Ch-pairs. The PLV between channels X and Y at time t is given as the length of the mean vector of θ across N blocks: PLV(t) = |Σ_{i=1}^{N} exp(jθ(t,i))| / N, where j denotes the imaginary unit, used to represent θ as a vector on the unit circle in the complex plane. PLV quantifies the inter-block variability of θ at time t; it is close to 1 if the phase difference varies little across blocks (phase-locking) and close to 0 if there is no phase-locking. We calculated PLV using at least four good blocks of data (N ≥ 4) for each participant. For each Ch-pair, we obtained good PLV data from at least 25 neonates. We performed two-tailed Wilcoxon’s rank sum tests (α = 0.05) on individual data to reveal Ch-pairs in which PLV between 3.0 and 14.9 s differed significantly from that in the 0.1-s silent pre-stimulation period. Moreover, individual PLV data underwent two-tailed Wilcoxon’s rank sum tests (α = 0.05) to identify Ch-pairs in which changes in PLV differed significantly between stimulus conditions. We have previously reported preliminary results for the same participant dataset using a different method of analysis (cross-correlation), but at that time we failed to reveal a clear difference between the conditions (Uchida et al., 2015).
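For illustration, the PLV computation reduces to taking the length of the mean phase-difference vector across blocks at each time point, as in the hypothetical helper below (a sketch, not the authors' code).

```python
import numpy as np

def plv_over_time(theta_blocks):
    """Phase-locking value at each time point across blocks (Lachaux et al., 1999).
    theta_blocks: array of shape (n_blocks, n_timepoints) holding phase differences
    theta(t, i) between a seed channel and a target channel."""
    return np.abs(np.mean(np.exp(1j * theta_blocks), axis=0))

# With at least four artefact-free blocks per neonate (N >= 4), a PLV near 1 at time t
# means the phase difference is consistent across blocks at that time.
```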

3. Results

All participants were asleep during measurements. Their sleep state was judged as ‘active sleep’ because we observed frequent motor activity of limbs and rapid eye movements. Furthermore, EEG and EOG recordings collected from 18 participants showed EEG patterns that were mainly composed of low-voltage irregular and mixed patterns (Anders et al., 1971; Scholle and Schäfer, 1999) and rapid eye movements.

Oxy- and deoxy-Hb values changed significantly across broad areas of the frontal cortex, both temporal cortices, and parts of the motor and somatosensory cortices during the ‘mother’ condition (p < 0.01, res > 0.5; Fig. 3A and Table S3). In contrast, the ‘stranger’ condition yielded significant changes in oxy-Hb only in the right temporal cortex (p < 0.005, res > 0.5; Fig. 3B and Table S3). As shown in Fig. 3C and Table S3, several channels differed significantly between the two stimulus conditions. Compared with the ‘stranger’ condition, the mother’s speech produced greater changes in oxy-Hb in the left and central frontal pole (FP; Chs 29 and 40) and right middle temporal gyrus (MTG; Ch 23) (p < 0.001, res > 0.6). Additionally, the mother’s speech also resulted in strong and significant changes in deoxy-Hb in the left inferior/middle frontal gyrus (IFG/MFG; Ch 38) and left precentral/superior temporal gyrus (PrCG/STG; Ch 6) (p < 0.005, res > 0.5). Here, we labeled Chs 38 and 6 as IFG/MFG and PrCG/STG, respectively. We note that the anatomical estimation of Ch 38 by Matsui et al. (2014) included a greater proportion of MFG than IFG, as shown in Table S3. This is because the definition of the IFG-MFG border is intricate and differs depending on the anatomical atlas, e.g., the AAL (Tzourio-Mazoyer et al., 2002), Brodmann (Lancaster et al., 2000), or Matsui et al. (2014) atlas. The latter atlas chiefly relies on AAL definitions and includes less IFG in this area than do other atlases.

Fig. 3. Cortical areas related to hearing familiar (mother) and unfamiliar (stranger) voices. (A) Channels with significant decreases in oxy-Hb (solid magenta) and increases in deoxy-Hb (dotted cyan) in response to a mother’s voice (compared with the prestimulus period). (B) Channels with significant increases in Hb signals in response to a stranger’s voice (compared with the prestimulus period). (C) Channels with significant differences between conditions. (D) Grand-averaged time courses of oxy-Hb (magenta) and deoxy-Hb (cyan) to a mother’s voice (solid line) and a stranger’s voice (dotted line) for each channel showing significant differences in panel C. Error-ranges between the two thin lines indicate the 95% confidence intervals. Exposure to the vocal stimulus was from time 0 to 15 s. FP, frontal pole; IFG/MFG, inferior/middle frontal gyrus; PrCG, precentral gyrus; STG, superior temporal gyrus; MTG, middle temporal gyrus.

We also found brain regions sensitive to the ‘stranger’ speech. Stronger changes in oxy-Hb occurred in the right superior temporal gyrus (STG; Ch 18) when hearing a stranger’s speech than when hearing a mother’s speech (p =  0.0001, res = 0.76). Grand averaged time courses of the Hb signals for these six channels are shown in Fig. 3D (for all channels see Figure S1). Averaged oxy-Hb and deoxy-Hb tended to decrease and increase, respectively, in distributed cortical areas when hearing the mother’s speech. Conversely, averaged oxy-Hb and deoxy-Hb tended to increase and decrease, respectively, when hearing a stranger’s speech. These amplitude changes in Hb signals were not linearly related to the participants’ ages in days (Table S4).

We also investigated the hemispheric lateralization of each region showing significant differences between the two stimulus conditions (Chs 40, 29, 38, 6, 18, and 23; see Fig. 3C). Apart from Ch 40, which lay on the midline and had no contralateral counterpart, the remaining five channels and their contralateral channels were entered into the two-way rank-based robust ANOVA with hemisphere and voice stimulus as factors for each channel (FP, IFG/MFG, PrCG/STG, STG, and MTG as the region labels of the five channels). The results revealed significant main effects of voice stimulus in all channels, and a significant interaction of the two factors was found only in FP (Ch 29 vs 33) (F = 11.227, p < 0.002, res > 0.4). Specifically, the decrease in FP oxy-Hb was significantly greater in the left hemisphere (Ch 29) than in the right hemisphere (Ch 33) during exposure to the mother’s voice.

We again focused on the six channels (Chs 40, 29, 38, 6, 18, and 23) that showed significant differences between the two voice stimulus conditions and investigated the strength of their connectivity with other channels. Using these Chs as seed ROIs (Chs 29 and 40 as ROI-1 in oxy-Hb, Ch 38 as ROI-2 in deoxy-Hb, Ch 6 as ROI-3 in deoxy-Hb, Ch 18 as ROI-4 in oxy-Hb, and Ch 23 as ROI-5 in oxy-Hb; Fig. 3C), we calculated the PLV between each seed and the other channels to determine functional connectivity. Fig. 4 (see also Table S5) shows the Ch-pairs in which PLV was significantly higher (significant connectivity) when hearing vocal stimuli than during the silent pre-stimulus period (uncorrected p < 0.05, res > 0.5). Overall, both mother and stranger speech elicited ipsilateral and contralateral functional connections. However, the mother’s speech elicited broader connections than the stranger’s speech. The central FP (Ch 40, ROI-1) formed a significant connection with the bilateral temporal areas during the ‘mother’ condition, but did not form any significant connections during the ‘stranger’ condition (Fig. 4A, row 1). Connections were formed between bilateral temporal areas and the left IFG/MFG (Ch 38, ROI-2) or left PrCG/STG (Ch 6, ROI-3) during the ‘mother’ condition (Fig. 4A, rows 2 and 3). In particular, as shown in Fig. 4B, the left IFG/MFG, which covers the anterior language area, had many strong connections with temporal areas including the STG and SMG (posterior language area), whereas this was not the case in the ‘stranger’ condition. The right STG (Ch 18, ROI-4), which exhibited significantly stronger changes in oxy-Hb during the ‘stranger’ condition (Fig. 3D; Table S3), did not actually form any connection with other areas during this condition. However, it did form connections with frontal areas during the ‘mother’ condition (Fig. 4C). The right MTG (Ch 23, ROI-5), which exhibited a significant increase in oxy-Hb during the ‘stranger’ condition and a significant decrease during the ‘mother’ condition (Fig. 3D, row 6), formed broad connections with frontal areas during both stimulus conditions (Fig. 4A, row 5). However, connections extended around the lateral fissure, including the right temporal pole, only in the ‘mother’ condition. Additionally, the central FP and left SMG were more strongly connected during the ‘mother’ condition (Fig. 5; uncorrected p = 0.013, res = 0.31). No PLVs were correlated with infant age in days (Kendall’s correlation analysis, p > 0.05; see Table S5).

Fig. 4. Spatial functional connectivity maps. (A) Overhead view of significant functional connectivity for seed ROIs in the FP, left IFG/MFG, left PrCG/STG, right STG, and right MTG while hearing a mother’s voice (left column) and a stranger’s voice (right column). Location of each seed ROI is shown by a black dot and an arrow, and the significant connections (p < 0.05) are shown by colored lines (oxy-Hb, magenta; deoxy-Hb, light blue). Connectivity strength is indicated by line thickness. (B) Connectivity with the left IFG/MFG viewed from the left side. (C) Connectivity with the right STG viewed from the front and diagonal right side. SMG: supramarginal gyrus (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article).

Fig. 5. Stronger connectivity in response to the mother’s voice than to a stranger’s voice. (A) View from the left side of the connectivity showing a significant difference between the ‘mother’ and ‘stranger’ conditions. (B) PLV time courses between the central FP and left SMG while hearing the mother’s voice (thick line) or a stranger’s voice (dotted thin line). * A significant difference between vocal stimulus conditions was detected during this period.

4. Discussion

In this study, maternal speech elicited significant hemodynamic changes in broad areas of the neonatal brain, particularly the frontal and temporal areas. Further functional connectivity analysis revealed that the frontal area synchronized with the bilateral temporal cortices when hearing maternal speech, particularly with the left temporal cortex.

4.1. Maternal speech enhances the left-side frontotemporal network

Left-lateralized responses in the temporal region have frequently been observed in infants for familiar spoken language (e.g., normal vs. reversed speech of a native language, Dehaene-Lambertz et al., 2002; native vs. foreign language, Minagawa-Kawai et al., 2011b; dialect differences, Cristià et al., 2014) as well as in neonates (e.g., normal vs. backward speech, Peña et al., 2003; native vs. foreign languages, Sato et al., 2012; unfamiliar spoken language vs. whistling language, Molavi et al., 2014). Dehaene-Lambertz et al. (2010) reported a significant response to maternal speech not only in the left temporal region of 2-month-olds, but also in the mPFC, OFC, and amygdala. Significantly greater activation in the mPFC was also observed when 7–9-month-old infants heard their mothers’ infant-directed speech (Naoi et al., 2012) and when 6-month-old infants heard their own names spoken by their mothers (Imafuku et al., 2014). The present study is consistent with these previous findings in revealing stronger brain responses in the superior temporal, prefrontal, and precentral regions to maternal speech than to non-maternal speech. In particular, compared with baseline, the ‘mother’ condition evoked significant activity in many left-hemisphere channels, most notably in the FP, which showed significant left dominance relative to its contralateral channel. In contrast, the ‘stranger’ condition evoked no significant activity in the left hemisphere relative to baseline. Thus, language processing appears to be more specialized or facilitated when speech is familiar. This interpretation is supported by previous behavioral and EEG studies reporting the advantage of maternal stimuli for language acquisition in infants (Barker and Newman, 2004; Cristià, 2011; Parise and Csibra, 2012).

Functional connectivity originating in the left prefrontal area (IFG/MFG) spread to broader temporal areas (including the left STG, MTG, and SMG) in the ‘mother’ condition than in the ‘stranger’ condition (Fig. 4B). This leftward connectivity might partially correspond to either of the previously described neural language networks: the dorsal pathway connecting the inferior frontal gyrus via the arcuate fasciculus to the temporal cortex (detected by DTI in infants; Dubois et al., 2009; Leroy et al., 2011) or the ventral pathway connecting the ventral IFG via the extreme capsule to the temporal cortex (detected by DTI in newborns; Perani et al., 2011). However, apart from a resting-state fNIRS study of 3-month-olds conducted after they had heard native speech spoken by a stranger (Homae et al., 2011), no frontotemporal functional connectivity has been reported in the literature. Thus, our current results are the first report of left frontotemporal connectivity in neonates while hearing speech. Moreover, we have demonstrated that long-range functional connectivity between the left temporal area (SMG) and the central FP is stronger in response to a mother’s speech than to a stranger’s speech (Fig. 5). This long-range functional connection was previously barely detectable in newborns (Zhang et al., 2007; Perani et al., 2011), which suggests that the neonatal FP might be indirectly connected to the left temporal region.

It seems that exposure to maternal speech in utero might have shaped this frontotemporal network. This idea is supported by the study mentioned above in which one-month exposure to maternal speech was reported to thicken the auditory cortex of infants born extremely prematurely (Webb et al., 2015). Furthermore, the strength of hemodynamic activity and functional connectivity induced by maternal speech in these left regions did not correlate with age (days since birth), as shown in Table S4. Taken together, our results provide some evidence that neonatal left frontotemporal connectivity is enhanced by maternal speech during the fetal period.

4.2. Maternal speech enhances the right frontotemporal network

The right temporal lobe is known to respond dominantly to prosodic differences in 3-month-old infants (Homae et al., 2006) and neonates (Arimitsu et al., 2011). It also responds to changes in voice type (male ⬌ female) in preterm infants at 28–32 gestational weeks (Mahmoudzadeh et al., 2013). In the present study, neonates exhibited significant activity in the right temporal area in response to both stranger and mother speech, presumably because both types of speech have the acoustic properties of the human voice, which are rich in prosodic information. However, the right temporal region (STG) was functionally connected to the FP, including the mPFC and OFC, only in the ‘mother’ condition (Fig. 4C). Because the right STG is engaged in voice identification, this network specific to the maternal voice may reflect processing of familiar voices in relation to attention, emotion, or other cognitive functions governed by the frontal area (see below for details). We assume that this right frontotemporal network is organized through daily exposure to maternal speech during pregnancy. This idea is supported by the non-significant correlation between connectivity strength and neonate age (days).

In adults, the right temporal region, including the STG, MTG, and temporal pole (TP), is associated with the discrimination of emotional prosody (Zatorre et al., 1992; Sander et al., 2005), the discrimination of voices and speaker identification (Belin and Zatorre, 2003; Kriegstein and Giraud, 2004; Nakamura et al., 2010), and the recall of social information regarding personal interactions or emotional episodes (McCarthy and Warrington, 1992; Markowitsch, 1995; Olson et al., 2007). The right STG relays information to the TP, which connects to the amygdala and FP (OFC) and is thus involved in emotional processing (Kondo et al., 2003; Liu et al., 2013). In neonates, the cingulate gyrus, which connects to the FP and is involved in emotional processing, exhibits small-world network properties in the right hemisphere (Ratnarajah et al., 2013). Although the present fNIRS study cannot directly visualize these deep connections because of its spatial limitations, the FP-right STG connectivity found exclusively in the ‘mother’ condition is likely involved in this small-world network.

4.3. Neural substrates that influence maternal speech-induced infant behavior

A mother’s speech influences her infant’s behavior. Previous behavioral studies that measured non-nutritive sucking in 1-month-old infants (Mehler et al., 1978) and neonates (DeCasper and Fifer, 1980) indicated that infants increase their sucking behavior more in response to their mother’s speech than in response to the speech of an unknown female. These results were interpreted to suggest that infants prefer their mother’s speech. However, Moon et al. (2015) did not find a sucking response associated with the mother’s speech in neonates and concluded that neonates are not sufficiently motivated by their mothers’ speech to alter sucking behavior. Taking these behavioral studies into account, we need to consider, in addition to voice-recognition processing in the neonatal brain, the motor and motivation processing needed to generate a behavior. We previously used EEG and respiratory rate (the number of breaths taken per minute) to investigate cortical activity associated with the respiratory behavior of neonates hearing their mother’s speech (Uchida et al., 2017). Several types of changes in respiratory rate were promoted by the mother’s speech, and the amplitude of the EEG delta rhythm in the frontal cortex (Fz) increased simultaneously. This suggests that the respiratory response to maternal speech may be associated with the frontal cortex, which may play a role in the motor and motivation processing that drives respiratory behavior from the neonatal period. The central FP (Ch 40) in the present study was near Fz and was functionally connected to the bilateral temporal cortex, including the STG (ROI-3 and -4 in Fig. 4A) and left SMG (Fig. 5A). Similar connectivity strengthened by hearing a mother’s speech was reported in children with high social communication skills (Abrams et al., 2016). The authors suggested that the STG is a voice-recognition region and that the frontal region is involved in reward and affective processing (motivation processing) of familiar sounds. While our results from the frontal cortex do not necessarily indicate such motivation processing, future work examining the relationship between FP-STG functional connectivity and respiratory behavior while hearing the maternal voice could clarify such a motivation system in neonates.

4.4. Physiology of hemodynamic response to mother’s voice

The physiological mechanisms underlying the various hemodynamic response patterns observed in infants are a controversial issue in infant fNIRS and fMRI studies (e.g., Lloyd-Fox et al., 2010; Arimitsu et al., 2015; Issard and Gervain, 2018). Typically, increased oxy-Hb with a slight decrease in deoxy-Hb (i.e., a positive blood-oxygen-level-dependent (BOLD) response) is observed, as represented by an HRF model (Peña et al., 2003; Arichi et al., 2010; Liao et al., 2010; Taga et al., 2011). In contrast, a negative BOLD response, characterized by decreased oxy-Hb and increased deoxy-Hb, has also been reported in infants (Yamada et al., 1997, 2000; Meek et al., 1998; Morita et al., 2000; Muramoto et al., 2002; Kusaka et al., 2004). Our results showed the typical HRF pattern for the ‘stranger’ condition and an atypical, reversed HRF for the ‘mother’ condition. It has been suggested that the factors producing this variety of hemodynamic response patterns include the difference between awake and sleep states (Meek et al., 1998; Taga et al., 2003, 2011; Kotilahti et al., 2005) and age (in days from birth, i.e., the amount of time that neonates have been exposed to the mother’s voice outside the womb). However, none of these factors explains the present data. The neonates were in the same sleep state (active sleep) across the stimulus conditions, yet showed differential HRFs depending on the vocal stimuli. We also confirmed that there was no correlation between age (in days) and hemodynamic changes for our participants 2 to 7 days after birth (Table S4). Systemic fluctuations evoked by task-related body movements and psychophysiological changes often contaminate fNIRS signals (Yamada et al., 2012) and may cause atypical HRFs. The effect of systemic fluctuations should be examined in future studies. However, systemic components may have had little effect on the results of the present study, because we discarded blocks containing head and body movement (see section 2.4). Task-evoked changes in skin blood flow due to systemic effects can also mask fNIRS signals. However, the contribution of the deep layer, including cortical tissue, to infant fNIRS signals during language tasks has been estimated at 72%, greater than that of the shallow layer including skin tissue (Funane et al., 2016). Therefore, it is likely that the fNIRS signals in the present study mainly reflected cortical rather than systemic hemodynamics.

In general, a typical (adult-like) HRF pattern is thought to mainly reflect hyperemia that delivers oxygenated blood to the activated brain region and outstrips the increase in oxygen consumption from neural activity (Attwell and Iadecola, 2002; Heeger and Ress, 2002; Sirotin et al., 2009). However, Yamamoto and Kato (2002), using a language task, reported that a reversed HRF (increased deoxy-Hb) reflected capillary hemodynamics associated with higher oxygen consumption, whereas the typical HRF reflected hemodynamic changes in the large veins. In addition, neonatal responses often show the reversed pattern, associated with the immaturity of hyperemia in neonatal cortical pial arteries (Kozberg et al., 2013) or with a higher oxygen demand for rapid synaptogenesis (Morita et al., 2000; Muramoto et al., 2002). Therefore, these atypical patterns can be regarded as signals reflecting brain activation. A difference in processing load might contribute to the differential HRF patterns. Specifically, cortical oxygen consumption could increase because of the diverse network synchronization evoked by familiar and eventful stimuli such as a mother’s speech. In the present study, the stranger’s speech evoked a positive BOLD response and limited or no network synchronization. The right STG (ROI-4), which exhibited a significantly positive BOLD response to the stranger’s speech, was not associated with any other region. Conversely, the mother’s speech evoked a negative BOLD response and increased PLVs in more diverse regions in the frontal, left, and right temporal areas compared with the stranger’s speech. ROI-4, which showed a non-significant weak negative BOLD response to the mother’s speech, connected with a broad area including the frontal cortex (Fig. 4C). Such broad synchronization while hearing the mother’s speech as a familiar stimulus could promote increased oxygen consumption and lead to a negative BOLD signal. Conversely, when hearing a stranger’s speech, oxygen consumption could be overwhelmed by hyperemia at the capillary or precapillary arteriole level, without recruiting pial arteries, resulting in positive BOLD responses. The mother’s voice likely elicits a unique HRF pattern because, as a familiar stimulus, it induces a strong processing load.

5. Conclusion

The present fNIRS study revealed an early cortical network enhanced by maternal speech in neonates using a phase synchronization method that was applicable to block-design data. The functional connectivity between the frontal pole and the temporal cortex was strengthened more when neonates heard their mother’s speech than when they heard a stranger’s speech. Frontotemporal connectivity was discussed in relation to the facilitation of the language network in the left hemisphere and in relation to voice identification for the right hemisphere. These results suggest that maternal speech fosters functional frontotemporal circuitry at the beginning of life, and probably contributes to the formation of higher cognitive networks such as language.

Funding

This work was supported by the Global COE (Center of Excellence) program of Keio University and the Japan Society for the Promotion of Science (JSPS) Kakenhi (Grant Nos. 15H01691, 19H05594, 24791123, 24591609, and 15K09725).

Declaration of Competing Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the parents for their cooperation. We also thank K. Kosaki, T. Yagihashi, I. Hokuto, Y. Matsuzaki, M. Miwa, E. Okishio, and all the staff of the neonatal unit of Keio University Hospital, Japan for help with this study. We thank N. Tanaka and H. Sato for technical support and K. Yatabe, S. Ishii, A. Matsuzaki, N. Naoi, M. Hata, M. Arai, H. Osawa, and Y. Hakuno for help conducting the measurements. We also wish to thank A. Phillips, PhD, and Benjamin Knight, MSc., from Edanz Group for editing a draft of this manuscript.

Appendix A. Supplementary data

Supplementary material related to this article can be found in the online version at doi: https://doi.org/10.1016/j.dcn.2019.100701.

The following is Supplementary data to this article:

mmc1.docx (365.4KB, docx)

References

  1. Abboub N., Nazzi T., Gervain J. Prosodic grouping at birth. Brain Lang. 2016;162:46–59. doi: 10.1016/j.bandl.2016.08.002. [DOI] [PubMed] [Google Scholar]
  2. Abrams D.A., Chen T., Odriozola P., Cheng K.M., Baker A.E., Padmanabhan A., Ryali S., Kochalka J., Feinstein C., Menon V. Neural circuits underlying mother’s voice perception predict social communication abilities in children. Proc. Natl. Acad. Sci. U.S.A. 2016;113(22):6295–6300. doi: 10.1073/pnas.1602948113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Anders T., Emde R., Parmelle A. UCLA Brain Information Service, NINDS Neurological information network.; Los Angeles, CA: 1971. A Manual of Standardized Terminology, Techniques and Criteria for Scoring of States of Sleep and Wakefulness in Newborn Infants. [Google Scholar]
  4. Arichi T., Moraux A., Melendez A., Doria V., Groppo M. Somatosensory cortical activation identified by functional MRI in preterm and term infants. Neuroimage. 2010;49(3):2063–2071. doi: 10.1016/j.neuroimage.2009.10.038. [DOI] [PubMed] [Google Scholar]
  5. Arimitsu T., Uchida-Ota M., Yagihashi T., Kojima S., Watanabe S. Functional hemispheric specialization in processing phonemic and prosodic auditory changes in neonates. Front. Psychol. 2011;2:202. doi: 10.3389/fpsyg.2011.00202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Arimitsu T., Minagawa Y., Takahashi T., Ikeda K. Assessment of developing speech perception in neonates using near-infrared spectroscopy. NeoReviews. 2015;16(8):e481. [Google Scholar]
  7. Arimitsu T., Minagawa Y., Yagihashi T., Uchida-Ota M., Matsuzaki A. The cerebral hemodynamic response to phonetic changes of speech in preterm and term infants: the impact of postmenstrual age. Neuroimage Clin. 2018;19:599–606. doi: 10.1016/j.nicl.2018.05.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Attwell D., Iadecola C. The neural basis of functional brain imaging signals. Trends Neurosci. 2002;25(12):621–625. doi: 10.1016/s0166-2236(02)02264-6. [DOI] [PubMed] [Google Scholar]
  9. Barker B.A., Newman R.S. Listen to your mother! The role of talker familiarity in infant streaming. Cognition. 2004;94(2):B45–B53. doi: 10.1016/j.cognition.2004.06.001. [DOI] [PubMed] [Google Scholar]
  10. Beauchemin M., González-Frankenberger B., Tremblay J., Vannasing P., Martínez-Montes E. Mother and stranger: an electrophysiological study of voice processing in newborns. Cereb. Cortex. 2010;21(8):1705–1711. doi: 10.1093/cercor/bhq242. [DOI] [PubMed] [Google Scholar]
  11. Belin P., Zatorre R.J. Adaptation to speaker’s voice in right anterior temporal lobe. Neuroreport. 2003;14(16):2105–2109. doi: 10.1097/00001756-200311140-00019. [DOI] [PubMed] [Google Scholar]
  12. Benjamini Y., Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B. 1995;57(1):289–300. [Google Scholar]
  13. Cooper R.P., Aslin R.N. Preference for infant-directed speech in the first month after birth. Child Dev. 1990;61(5):1584–1595. [PubMed] [Google Scholar]
  14. Cristià A. Fine-grained variation in caregivers’ /s/ predicts their infants’ /s/ category. J. Acoust. Soc. Am. 2011;129(5):3271–3280. doi: 10.1121/1.3562562. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Cristià A., Minagawa-Kawai Y., Egorova N., Gervain J., Filippin L. Neural correlates of infant accent discrimination: an fNIRS study. Dev. Sci. 2014;17(4):628–635. doi: 10.1111/desc.12160. [DOI] [PubMed] [Google Scholar]
  16. DeCasper A.J., Fifer W.P. Of human bonding: newborns prefer their mother’s voice. Science. 1980;208(4448):1174–1176. doi: 10.1126/science.7375928. [DOI] [PubMed] [Google Scholar]
  17. Dehaene-Lambertz G., Dehaene S., Hertz-Pannier L. Functional neuroimaging of speech perception in infants. Science. 2002;298(5600):2013–2015. doi: 10.1126/science.1077066. [DOI] [PubMed] [Google Scholar]
  18. Dehaene-Lambertz G., Montavont A., Jobert A., Allirol L., Dubois J. Language or music, mother or Mozart? Structural and environmental influence on infants’ language networks. Brain Lang. 2010;114(2):53–65. doi: 10.1016/j.bandl.2009.09.003. [DOI] [PubMed] [Google Scholar]
  19. deRegnier R., Nelson C.A., Thomas K.M., Wewerka S., Georgieff M.K. Neurophysiologic evaluation of auditory recognition memory in healthy newborn infants and infants of diabetic mothers. J. Pediatr. 2000;137(6):777–784. doi: 10.1067/mpd.2000.109149. [DOI] [PubMed] [Google Scholar]
  20. Dubois J., Hertz-Pannier L., Cachia A., Mangin J.F., Le Bihan D. Structural asymmetries in the infant language and sensori-motor networks. Cereb. Cortex. 2009;19(2):414–423. doi: 10.1093/cercor/bhn097. [DOI] [PubMed] [Google Scholar]
  21. Field A.P. 2nd edition. Sage; London: 2005. Discovering Statistics Using SPSS. [Google Scholar]
  22. Funane T., Homae F., Watanabe H., Kiguchi M., Taga G. Greater contribution of cerebral than extracerebral hemodynamics to near-infrared spectroscopy signals for functional activation and resting-state connectivity in infants. Neurophotonics. 2016;1(2) doi: 10.1117/1.NPh.1.2.025003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Gemignani J., Middell E., Barbour R.L., Graber H.L., Blankertz B. Improving the analysis of near-infrared spectroscopy data with multivariate classification of hemodynamic patterns: a theoretical formulation and validation. J. Neural Eng. 2018;15(4) doi: 10.1088/1741-2552/aabb7c. [DOI] [PubMed] [Google Scholar]
  24. Gervain J., Mehler J., Werker J.F., Nelson C.A., Csibra G. Near-infrared spectroscopy: a report from the McDonnell infant methodology consortium. Dev. Cogn. Neurosci. 2011;1(1):22–46. doi: 10.1016/j.dcn.2010.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Grossmann T., Oberecker R., Koch S.P., Friederici A. The developmental origins of voice processing in the human brain. Neuron. 2010;65(6):852–858. doi: 10.1016/j.neuron.2010.03.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Haber S.N., Knutson B. The reward circuit: linking primate anatomy and human imaging. Neuropsychopharmacology. 2010;35(1):4–26. doi: 10.1038/npp.2009.129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Heeger D.J., Ress D. What does fMRI tell us about neural activity? Nat. Rev. Neurosci. 2002;3(2):142–151. doi: 10.1038/nrn730. [DOI] [PubMed] [Google Scholar]
  28. Hettmansperger T.P., McKean J.W. 2nd ed. Chapman-Hall; New York: 2011. Robust Nonparametric Statistical Methods. [Google Scholar]
  29. Hocking R.R. Brooks/Cole; Monterey, California: 1985. The Analysis of Linear Models. [Google Scholar]
  30. Homae F., Watanabe H., Nakano T., Asakawa K., Taga G. The right hemisphere of sleeping infant perceives sentential prosody. Neurosci. Res. 2006;54(4):276–280. doi: 10.1016/j.neures.2005.12.006. [DOI] [PubMed] [Google Scholar]
  31. Homae F., Watanabe H., Nakano T., Taga G. Large-scale brain networks underlying language acquisition in early infancy. Front. Psychol. 2011;2:93. doi: 10.3389/fpsyg.2011.00093. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Imafuku M., Hakuno Y., Uchida-Ota M., Yamamoto J., Minagawa Y. “Mom called me!” Behavioral and prefrontal responses of infants to self-names spoken by their mothers. Neuroimage. 2014;103:476–484. doi: 10.1016/j.neuroimage.2014.08.034. [DOI] [PubMed] [Google Scholar]
  33. Issard C., Gervain J. Variability of the hemodynamic response in infants: influence of experimental design and stimulus complexity. Dev. Cogn. Neurosci. 2018;33:182–193. doi: 10.1016/j.dcn.2018.01.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Kondo K., Saleem K.S., Price J.L. Differential connections of the temporal pole with the orbital and medial prefrontal networks in macaque monkeys. J. Comp. Neurol. 2003;465(4):499–523. doi: 10.1002/cne.10842. [DOI] [PubMed] [Google Scholar]
  35. Kotilahti K., Nissilä I., Huotilainen M., Mäkelä R., Gavrielides N. Bilateral hemodynamic responses to auditory stimulation in newborn infants. Neuroreport. 2005;16(12):1373–1377. doi: 10.1097/01.wnr.0000175247.35837.15. [DOI] [PubMed] [Google Scholar]
  36. Kozberg M.G., Chen B.R., DeLeo S.E., Bouchard M.B., Hillman E.M.C. Resolving the transition from negative to positive blood oxygen level-dependent responses in the developing brain. Proc. Natl. Acad. Sci. U.S.A. 2013;110(11):4380–4385. doi: 10.1073/pnas.1212785110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Kriegstein K.V., Giraud A.L. Distinct functional substrates along the right superior temporal sulcus for the processing of voices. Neuroimage. 2004;22(2):948–955. doi: 10.1016/j.neuroimage.2004.02.020. [DOI] [PubMed] [Google Scholar]
  38. Kusaka T., Kawada K., Okubo K., Nagano K., Namba M. Noninvasive optical imaging in the visual cortex in young infants. Hum. Brain Mapp. 2004;22(2):122–132. doi: 10.1002/hbm.20020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Lachaux J.P., Rodriguez E., Martinerie J., Varela F.J. Measuring phase synchrony in brain signals. Hum. Brain Mapp. 1999;8(4):194–208. doi: 10.1002/(SICI)1097-0193(1999)8:4<194::AID-HBM4>3.0.CO;2-C. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Lancaster J.L., Woldorff M.G., Parsons L.M., Liotti M., Freitas C.S. Automated Talairach atlas labels for functional brain mapping. Hum. Brain Mapp. 2000;10(3):120–131. doi: 10.1002/1097-0193(200007)10:3<120::AID-HBM30>3.0.CO;2-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Leroy F., Glasel H., Dubois J., Hertz-Pannier L., Thirion B. Early maturation of the linguistic dorsal pathway in human infants. J. Neurosci. 2011;31(4):1500–1506. doi: 10.1523/JNEUROSCI.4141-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Liao S.M., Gregg N.M., White B.R., Zeff B.W., Bjerkaas K.A. Neonatal hemodynamic response to visual cortex activity: high-density near-infrared spectroscopy study. J. Biomed. Opt. 2010;15(2) doi: 10.1117/1.3369809. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Liu H., Qin W., Li W., Fan L., Wang J. Connectivity-based parcellation of the human frontal pole with diffusion tensor imaging. J. Neurosci. 2013;33(16):6782–6790. doi: 10.1523/JNEUROSCI.4882-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Lloyd-Fox S., Blasi A., Elwell C.E. Illuminating the developing brain: the past, present and future of functional near infrared spectroscopy. Neurosci. Biobehav. Rev. 2010;34(3):269–284. doi: 10.1016/j.neubiorev.2009.07.008. [DOI] [PubMed] [Google Scholar]
  45. Mahmoudzadeh M., Dehaene-Lambertz G., Fournier M., Kongolo G., Goudjil S. Syllabic discrimination in premature human infants prior to complete formation of cortical layers. Proc. Natl. Acad. Sci. U.S.A. 2013;110(12):4846–4851. doi: 10.1073/pnas.1212220110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Maki A., Yamashita Y., Ito Y., Watanabe E., Koizumi H. Spatial and temporal analysis of human motor activity using non-invasive NIR topography. Med. Phys. 1995;22(12):1997–2005. doi: 10.1118/1.597496. [DOI] [PubMed] [Google Scholar]
  47. Markowitsch H.J. Which brain regions are critically involved in the retrieval of old episodic memory? Brain Res. Rev. 1995;21(2):117–127. doi: 10.1016/0165-0173(95)00007-0. [DOI] [PubMed] [Google Scholar]
  48. Matsui M., Homae F., Tsuzuki D., Watanabe H., Katagiri M. Referential framework for transcranial anatomical correspondence for fNIRS based on manually traced sulci and gyri of an infant brain. Neurosci. Res. 2014;80:55–68. doi: 10.1016/j.neures.2014.01.003. [DOI] [PubMed] [Google Scholar]
  49. May L., Byers-Heinlein K., Gervain J., Werker J.F. Language and the newborn brain: does prenatal language experience shape the neonate neural response to speech? Front. Psychol. 2011;2:222. doi: 10.3389/fpsyg.2011.00222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. McCarthy R.A., Warrington E.K. Actors but not scripts: the dissociation of people and events in retrograde amnesia. Neuropsychologia. 1992;30(7):633–644. doi: 10.1016/0028-3932(92)90068-w. [DOI] [PubMed] [Google Scholar]
  51. Meek J.H., Firbank M., Elwell C.E., Atkinson J., Braddick O., Wyatt J.S. Regional haemodynamic responses to visual stimulation in awake infants. Pediatr. Res. 1998;43(6):840–843. doi: 10.1203/00006450-199806000-00019. [DOI] [PubMed] [Google Scholar]
  52. Mehler J., Bertoncini J., Barrière M., Jassik-Gerschenfeld D. Infant recognition of mother’s voice. Perception. 1978;7(5):491–497. doi: 10.1068/p070491. [DOI] [PubMed] [Google Scholar]
  53. Minagawa-Kawai Y., Cristià A., Dupoux E. Cerebral lateralization and early speech acquisition: a developmental scenario. Dev. Cogn. Neurosci. 2011;1(3):217–232. doi: 10.1016/j.dcn.2011.03.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Minagawa-Kawai Y., van der Lely H., Ramus F., Sato Y., Mazuka R., Dupoux E. Optical brain imaging reveals general auditory and language-specific processing in early infant development. Cereb. Cortex. 2011;21(2):254–261. doi: 10.1093/cercor/bhq082. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Molavi B., May L., Gervain J., Carreiras M., Werker J.F. Analyzing the resting state functional connectivity in the human language system using near infrared spectroscopy. Front. Hum. Neurosci. 2014;7(921):1–9. doi: 10.3389/fnhum.2013.00921. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Montague D.P.F., Walker-Andrews A.S. Mothers, fathers, and infants: the role of person familiarity and parental involvement in infants’ perception of emotion expressions. Child Dev. 2002;73(3):1339–1352. doi: 10.1111/1467-8624.00475. [DOI] [PubMed] [Google Scholar]
  57. Moon C., Zernzach R.C., Kuhl P.K. Mothers say “baby” and their newborns do not choose to listen: a behavioral preference study to compare with ERP results. Front. Hum. Neurosci. 2015;9:153. doi: 10.3389/fnhum.2015.00153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Morita T., Kochiyama T., Yamada H., Konishi Y., Yonekura Y. Difference in the metabolic response to photic stimulation of the lateral geniculate nucleus and the primary visual cortex of infants: a fMRI study. Neurosci. Res. 2000;38(1):63–70. doi: 10.1016/s0168-0102(00)00146-2. [DOI] [PubMed] [Google Scholar]
  59. Muramoto S., Yamada H., Sadato N., Kimura H., Konishi Y. Age-dependent change in metabolic response to photic stimulation of the primary visual cortex in infants: functional magnetic resonance imaging study. J. Comput. Assist. Tomogr. 2002;26(6):894–901. doi: 10.1097/00004728-200211000-00007. [DOI] [PubMed] [Google Scholar]
  60. Nakamura K., Kawashima R., Sugiura M., Kato T., Nakamura A. Neural substrates for recognition of familiar voices: a PET study. Neuropsychologia. 2001;39(10):1047–1054. doi: 10.1016/s0028-3932(01)00037-9. [DOI] [PubMed] [Google Scholar]
  61. Naoi N., Minagawa-Kawai Y., Kobayashi A., Takeuchi K., Nakamura K. Cerebral responses to infant-directed speech and the effect of talker familiarity. NeuroImage. 2012;59(2):1735–1744. doi: 10.1016/j.neuroimage.2011.07.093. [DOI] [PubMed] [Google Scholar]
  62. Okamoto M., Dan I. Automated cortical projection of head-surface locations for transcranial functional brain mapping. NeuroImage. 2005;26(1):18–28. doi: 10.1016/j.neuroimage.2005.01.018. [DOI] [PubMed] [Google Scholar]
  63. Olson I.R., Plotzker A., Ezzyat Y. The enigmatic temporal pole: a review of findings on social and emotional processing. Brain. 2007;130(7):1718–1731. doi: 10.1093/brain/awm052. [DOI] [PubMed] [Google Scholar]
  64. Otnes R.K., Enochson L. John Wiley & Sons; New York: 1972. Digital Time Series Analysis. [Google Scholar]
  65. Parise E., Csibra G. Electrophysiological evidence for the understanding of maternal speech by 9-month-old infants. Psychol. Sci. 2012;23(7):728–733. doi: 10.1177/0956797612438734. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Peña M., Maki A., Kovacić D., Dehaene-Lambertz G., Koizumi H. Sounds and silence: an optical topography study of language recognition at birth. Proc. Natl. Acad. Sci. U.S.A. 2003;100(20):11702–11705. doi: 10.1073/pnas.1934290100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Perani D., Saccuman M.C., Scifo P., Anwander A., Spada D. Neural language networks at birth. Proc. Natl. Acad. Sci. U.S.A. 2011;108(38):16056–16061. doi: 10.1073/pnas.1102991108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Purhonen M., Kilpeläinen-Lees R., Valkonen-Korhonen M., Karhu J., Lehtonen J. Four-month-old infants process own mother’s voice faster than unfamiliar voices—electrical signs of sensitization in infant brain. Cogn. Brain Res. 2005;24:627–633. doi: 10.1016/j.cogbrainres.2005.03.012. [DOI] [PubMed] [Google Scholar]
  69. Ratnarajah N., Rifkin-Graboi A., Fortier M.V., Chong Y.S., Kwek K. Structural connectivity asymmetry in the neonatal brain. NeuroImage. 2013;75:187–194. doi: 10.1016/j.neuroimage.2013.02.052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Sander D., Grandjean D., Pourtois G., Schwartz S., Seghier M.L. Emotion and attention interactions in social cognition: brain regions involved in processing anger prosody. NeuroImage. 2005;28(4):848–858. doi: 10.1016/j.neuroimage.2005.06.023. [DOI] [PubMed] [Google Scholar]
  71. Sato H., Hirabayashi Y., Tsubokura H., Kanai M., Ashida T. Cerebral hemodynamics in newborn infants exposed to speech sounds: a whole-head optical topography study. Hum. Brain Mapp. 2012;33(9):2092–2103. doi: 10.1002/hbm.21350. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Schreiber T., Schmitz A. Improved surrogate data for nonlinearity tests. Phys. Rev. Lett. 1996;77(4):635. doi: 10.1103/PhysRevLett.77.635. [DOI] [PubMed] [Google Scholar]
  73. Scholle S., Schäfer T. Atlas of states of sleep and wakefulness in infants and children. Somnologie. 1999;3:163–165. [Google Scholar]
  74. Siddappa A.M., Georgieff M.K., Wewerka S., Worwa C., Nelson C.A., Deregnier R.A. Iron deficiency alters auditory recognition memory in newborn infants of diabetic mothers. Pediatr. Res. 2004;55(6):1034–1041. doi: 10.1203/01.pdr.0000127021.38207.62. [DOI] [PubMed] [Google Scholar]
  75. Sirotin Y.B., Hillman E.M., Bordier C., Das A. Spatiotemporal precision and hemodynamic mechanism of optical point spreads in alert primates. Proc. Natl. Acad. Sci. U.S.A. 2009;106(43):18390–18395. doi: 10.1073/pnas.0905509106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Taga G., Asakawa K., Maki A., Konishi Y., Koizumi H. Brain imaging in awake infants by near-infrared optical topography. Proc. Natl. Acad. Sci. U.S.A. 2003;100(19):10722–10727. doi: 10.1073/pnas.1932552100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Taga G., Watanabe H., Homae F. Spatiotemporal properties of cortical haemodynamic response to auditory stimuli in sleeping infants revealed by multi-channel near-infrared spectroscopy. Philos. Trans. Royal Soc. A. 2011;369:4495–4511. doi: 10.1098/rsta.2011.0238. [DOI] [PubMed] [Google Scholar]
  78. Tak S., Kempny A.M., Friston K.J., Leff A.P., Penny W.D. Dynamic causal modelling for functional near-infrared spectroscopy. Neuroimage. 2015;111:338–349. doi: 10.1016/j.neuroimage.2015.02.035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Tass P., Rosenblum M.G., Weule J., Kurths J., Pikovsky A. Detection of n:m phase locking from noisy data: application to magnetoencephalography. Phys. Rev. Lett. 1998;81(15):3291–3294. [Google Scholar]
  80. Therien J.M., Worwa C.T., Mattia F.R., deRegnier R.O. Altered pathways for auditory discrimination and recognition memory in preterm infants. Dev. Med. Child Neurol. 2004;46:816–824. doi: 10.1017/s0012162204001434. [DOI] [PubMed] [Google Scholar]
  81. Tsuzuki D., Jurcak V., Singh A.K., Okamoto M., Watanabe E., Dan I. Virtual spatial registration of stand-alone functional NIRS data to MNI space. NeuroImage. 2007;34(4):1506–1518. doi: 10.1016/j.neuroimage.2006.10.043. [DOI] [PubMed] [Google Scholar]
  82. Tsuzuki D., Homae F., Taga G., Watanabe H., Matsui M., Dan I. Macroanatomical landmarks featuring junctions of major sulci and fissures and scalp landmarks based on the international 10–10 system for analyzing lateral cortical development of infants. Front. Neurosci. 2017;11:394. doi: 10.3389/fnins.2017.00394. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Tzourio-Mazoyer N., Landeau B., Papathanassiou D., Crivello F., Etard O. Automated anatomical labelling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage. 2002;15(1):273–289. doi: 10.1006/nimg.2001.0978. [DOI] [PubMed] [Google Scholar]
  84. Uchida O.M., Arimitsu T., Yatabe K., Ikeda K., Takahashi T., Minagawa Y. Effect of mother’s voice on neonatal respiratory activity and EEG delta amplitude. Dev. Psychobiol. 2017. doi: 10.1002/dev.21596. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Uchida O.M., Arimitsu T., Yatabe K., Ikeda K., Takahashi T., Minagawa Y. Persistent functional connectivity between frontal and temporal cortices while neonates hear their mothers’ voice. Japan Women’s Univ. J. 2015;25:93–101. (In Japanese) [Google Scholar]
  86. Webb A.R., Heller H.T., Benson C.B., Lahav A. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation. Proc. Natl. Acad. Sci. U.S.A. 2015;112(10):3152–3157. doi: 10.1073/pnas.1414924112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Yamada H., Sadato N., Konishi Y., Kimura K., Tanaka M. A rapid brain metabolic change in infants detected by fMRI. Neuroreport. 1997;8(17):3775–3778. doi: 10.1097/00001756-199712010-00024. [DOI] [PubMed] [Google Scholar]
  88. Yamada H., Sadato N., Konishi Y., Muramoto S., Kimura K. A milestone for normal development of the infantile brain detected by functional MRI. Neurology. 2000;55(2):218–223. doi: 10.1212/wnl.55.2.218. [DOI] [PubMed] [Google Scholar]
  89. Yamada T., Umeyama S., Matsuda K. Separation of fNIRS signals into functional and systemic components based on differences in hemodynamic modalities. PLoS One. 2012;7(11) doi: 10.1371/journal.pone.0050271. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Yamamoto T., Kato T. Paradoxical correlation between signal in functional magnetic resonance imaging and deoxygenated haemoglobin content in capillaries: a new theoretical explanation. Phys. Med. Biol. 2002;47:1121–1141. doi: 10.1088/0031-9155/47/7/309. [DOI] [PubMed] [Google Scholar]
  91. Zatorre R.J., Evans A.C., Meyer E., Gjedde A. Lateralization of phonetic and pitch discrimination in speech processing. Science. 1992;256(5058):846–849. doi: 10.1126/science.1589767. [DOI] [PubMed] [Google Scholar]
  92. Zhang J., Evans A., Hermoye L., Lee S.K., Wakana S. Evidence of slow maturation of the superior longitudinal fasciculus in early childhood by diffusion tensor imaging. NeuroImage. 2007;38(2):239–247. doi: 10.1016/j.neuroimage.2007.07.033. [DOI] [PMC free article] [PubMed] [Google Scholar]

Supplementary Materials

mmc1.docx (365.4KB, docx)
