Abstract
Vocal intonation, a fundamental element of speech, is pivotal for effective comprehension and communication. Children with hearing impairment, however, have difficulty recognizing vocal intonation patterns, primarily because of their auditory deficits. In 2020, a study conducted at Tianjin Medical University General Hospital in Tianjin, China, recruited five deaf children and two children with normal hearing (male; mean age = 10.21 ± 0.4 years) to compare deaf and normally hearing children on four Chinese tone recognition tasks. The results revealed that (1) due to hearing loss, some of the auditory cortices responsible for processing vocal intonation in deaf children do not function optimally; (2) when decoding vocal intonation information, deaf children might utilize alternative neural pathways or networks; and (3) deaf children exhibit hemispheric specialization in their processing of vocal intonation cues.
Keywords: Deaf children, Vocal tones, fMRI, Brain regions
Subject terms: Psychology, Auditory system, Cognitive neuroscience
Introduction
The incidence of deafness in children is on an upward trend. These children face numerous challenges in their daily lives owing to the complexities of language acquisition and communication. Speech, an essential component of human interaction, is significantly shaped by tonal variations: different tones can alter the meaning of the same word1. Deaf children (DC) struggle to comprehend speech accurately, articulate speech sounds, and differentiate tones, difficulties that stem primarily from their hearing and speech impairments.
In recent years, functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool in the study of speech perception. This technique enables detailed examination of the neural mechanisms involved in speech processing by directly capturing the brain’s responses to auditory stimuli2. Specifically, fMRI has offered novel insights into the speech impairments experienced by deaf children and has provided a unique perspective on the neurological deviations affecting their speech perception abilities3.
Recent advancements in fMRI research have brought a notable increase in the investigation of tone recognition and its implications for deaf children. Within tone recognition studies, evidence suggests that individuals proficient in tonal languages exhibit enhanced cerebral activation during the processing of vocal tones, indicating a specialized neural mechanism for tone recognition4. Specifically, Kwok et al. have delineated the left inferior frontal gyrus, right middle temporal gyrus, and bilateral superior temporal gyrus as critical neural substrates in the perception of tones, as evidenced by fMRI analyses5. Further exploration into the neural basis of lexical tone perception in visual Chinese word recognition has revealed that bilateral frontoparietal regions play a pivotal role in the extraction of printed lexical tones, engaging an extensive network that encompasses frontal, parietal, motor, and cingulate regions; this suggests a multifaceted neural mechanism for Chinese tone reading that does not recruit the temporal regions traditionally associated with auditory tone recognition6. Moreover, Chien et al. have identified contributions from bilateral temporoparietal semantic and subcortical regions in tone processing, a finding exclusive to Mandarin speakers, indicating language-specific neural adaptations7. Si et al. have uncovered evidence of a distributed collaborative cortical network underpinning the categorical processing of lexical tones among speakers of tonal languages, highlighting the complexity of tone processing at the neural level8. The discourse on hemispheric lateralization has also evolved, with propositions that while tones and segments may be processed concurrently in the left and right hemispheres, the integration or resultant product of this processing predominantly occurs in the left hemisphere9, suggesting sophisticated interhemispheric coordination for the processing and integration of linguistic tones. Together, these studies underline the intricate neural pathways involved in tone recognition and their relevance to language acquisition in deaf children.
In the expansive field of neuroscience and language science, significant research has focused on elucidating the neural underpinnings of vocal tone recognition. However, investigations into these mechanisms in deaf children, particularly regarding the recognition of Chinese tones, are comparatively less prevalent. Our study seeks to address this gap by exploring the neural mechanisms that underpin tone recognition in deaf children who speak Mandarin. This line of inquiry is crucial for advancing our understanding of how tone information is processed by this distinct group. We aim to refine educational and rehabilitative methodologies tailored to their specific needs. Previous research indicates that early cochlear implantation plays a pivotal role in aiding deaf children in differentiating Mandarin tones. Children who have had prolonged exposure to cochlear implants (CI) demonstrate the ability to produce intelligible speech, which is closely linked to their performance in tone production10–12. By integrating these insights, our research aims not only to deepen the theoretical frameworks surrounding neural processing in deaf children but also to enhance intervention strategies that support their linguistic development and integration into communicative environments. Overall, our investigation into the neural mechanisms of tone recognition in Mandarin-speaking deaf children is poised to contribute significantly to the broader understanding of tone processing’s neural bases. These insights are expected to propel forward both academic inquiry and practical applications, fostering improved outcomes for deaf children’s language acquisition.
Materials and methods
Subjects
Seven right-handed male children (mean age = 10.21 ± 0.4 years) from Tianjin were recruited for this study, consisting of five DC and two hearing children (HC). Prior to participation, each child underwent a medical evaluation to confirm their physical health and suitability for the study. The HC group, all of whom had normal or corrected-to-normal vision, reported no neurological disorders or hearing impairments. The DC group was classified according to the criteria established by the World Health Organization (WHO). All participants in this group had postnatally acquired hearing loss of 90 dB HL or greater, occurring before the age of 12. These children, with bilateral profound hearing loss, typically used hearing aids (with post-aid hearing reaching normal levels, defined as hearing loss < 60 dB), but removed them before entering the MRI room for the study. All DC participants underwent pure-tone audiometry prior to the study to determine their hearing thresholds and ensure that they could hear the stimuli during the experiment; the stimulus volume was adjusted based on these thresholds so that the sounds were audible during the test. To obtain a comprehensive understanding of the background of the DC participants, demographic information was collected via a questionnaire, including the type of hearing loss, age, hearing aid usage, and experience with Mandarin Pinyin. All participants in the DC group had undergone systematic Mandarin Pinyin training and were able to recognize the four tones of Mandarin Chinese. Written informed consent was provided by the guardians of all participants, and the study was approved by the Research Ethics Review Committee of Tianjin University of Technology. All procedures were carried out in accordance with relevant ethical guidelines and regulations. Throughout the study, participants received compensation and continued support and guidance. The participants’ pure-tone audiometry information is shown in Table 1.
Table 1.
Statistics of pure tone testing.
| | DC | HC | Statistic | P value |
|---|---|---|---|---|
| Gender (male/female) | 2/3 | 1/1 | 0.01 | 0.92 |
| Age (years) | 10.21 ± 0.4 | 10.26 | | 0.91 |
| Duration of hearing loss (years) | 9.19 ± 1.27 | | | |
| Age started wearing hearing aids (years) | 1.67 ± 0.22 | | | |
| Left 500 Hz (dB HL) | 81.79 ± 18.15 | 7.35 ± 3.93 | | < 0.001 |
| Left 1000 Hz (dB HL) | 96.92 ± 19.21 | 8.38 ± 4.88 | 26.12 | < 0.001 |
| Left 2000 Hz (dB HL) | 101.15 ± 18.72 | 6.32 ± 4.81 | 28.69 | < 0.001 |
| Left 4000 Hz (dB HL) | 103.94 ± 18.03 | 10.29 ± 9.76 | 27.01 | < 0.001 |
| Left 8000 Hz (dB HL) | 92.94 ± 12.01 | 10.29 ± 10.93 | 30.56 | < 0.001 |
| Right 500 Hz (dB HL) | 93.07 ± 17.34 | 13.08 ± 5.07 | 25.92 | < 0.001 |
| Right 1000 Hz (dB HL) | 108.97 ± 13.72 | 18.38 ± 5.32 | 36.15 | < 0.001 |
| Right 2000 Hz (dB HL) | 108.97 ± 15.09 | 14.26 ± 5.65 | 34.50 | < 0.001 |
| Right 4000 Hz (dB HL) | 108.58 ± 17.12 | 14.55 ± 6.78 | 30.00 | < 0.001 |
| Right 8000 Hz (dB HL) | 94.74 ± 10.57 | 10.29 ± 8.34 | 37.48 | < 0.001 |
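For orientation, group comparisons of the kind reported in Table 1 can be reproduced with standard tools. The sketch below is illustrative only: the per-participant threshold values are hypothetical stand-ins for the raw audiometry data, and Welch’s unequal-variance t-test is one reasonable choice for such unbalanced groups; the paper does not state which test was used.

```python
# Illustrative group comparison in the style of Table 1.
# The threshold arrays (dB HL) are hypothetical per-participant values,
# not the study's raw data.
import numpy as np
from scipy import stats

dc_left_500hz = np.array([60.0, 75.0, 82.0, 90.0, 102.0])  # 5 DC (hypothetical)
hc_left_500hz = np.array([4.5, 10.2])                      # 2 HC (hypothetical)

# Welch's t-test: does not assume equal group variances or sizes.
t_stat, p_val = stats.ttest_ind(dc_left_500hz, hc_left_500hz, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4g}")
```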
Stimulation tasks
The experimental stimuli consisted of 96 dyads, each formed by two tones of 500 Hz frequency and 100 ms duration, presented using E-prime software (Psychology Software Tools, Inc., version 2.0). The auditory stimuli included voice pairs articulating the numerals “one”, “two”, “three”, and “four”, with each numeral presented in four unique permutations. These stimuli were recorded by a male speaker and represent the four basic Mandarin lexical tones: Tone 1, Tone 2, Tone 3, and Tone 4. All audio files for the tone stimuli were professionally recorded to ensure clarity and standardization of the sounds. The combinations of these tones were randomly sequenced across two separate blocks of the experiment. The selected stimulus materials encompass speech stimuli with varying pitch and tone contours, designed specifically to capture the characteristics of Mandarin tones; by presenting different pitch and tone contour changes, they simulate tone variations. The aim was to investigate the brain activation patterns of deaf children and children with normal hearing during tone perception.
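As a concrete illustration of the stimulus design, the sketch below assembles 96 randomly ordered tone dyads (all 4 × 4 ordered pairings of the four Mandarin tones, each repeated six times) and splits them into two 48-trial blocks. This is a minimal sketch of our reading of the design; the .wav file names are hypothetical placeholders, and the actual presentation was handled by E-prime.

```python
# Sketch of building the randomized tone-dyad trial list (96 trials,
# two blocks of 48). File names such as "num_tone1.wav" are hypothetical.
import itertools
import random

TONES = [1, 2, 3, 4]

dyads = list(itertools.product(TONES, TONES))  # 16 ordered tone pairings
trials = dyads * 6                             # repeat to reach 96 dyads
random.shuffle(trials)                         # random sequencing

block1, block2 = trials[:48], trials[48:]
for tone_a, tone_b in block1[:3]:              # preview the first 3 trials
    print(f"play num_tone{tone_a}.wav, then num_tone{tone_b}.wav")
```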
Experimental steps
We performed fMRI while the participants performed a tone recognition task. Initially, four identical Hanyu Pinyin characters appeared on the screen, as shown in Fig. 1, each accompanied by one of the tones “Tone 1”, “Tone 2”, “Tone 3”, and “Tone 4”. Participants were instructed to listen to the sounds and classify them into one of the four categories by pressing the “1”, “2”, “3”, or “4” button according to the pitch pattern. Participants practiced establishing category-response mappings prior to scanning. Auditory stimuli were administered and managed via E-prime software (Psychology Software Tools, Inc., version 2.0). To minimize the impact of scanner noise on auditory perception, a custom sparse-sampling fMRI sequence was employed, in which stimuli were delivered during a 1000-ms silent interval between successive image acquisitions. Stimulus presentation was synchronized with the onset of each image acquisition to ensure that the stimuli fell within the silent interval. The stimulus sequence was divided into two blocks. In each block (i.e., fMRI run or session), 48 tone pairs were presented in random order under E-prime control; participants identified these stimuli in both blocks, so each subject performed a total of 96 trials. For each trial, the categorical response and reaction time (RT) were recorded. Additionally, all participants wore magnetically shielded headphones, through which all stimuli were delivered, to minimize the impact of MRI environment noise on auditory perception. The DC group removed their hearing aids during the test, and the stimulus volume was set above each participant’s hearing threshold, based on their pure-tone audiometry results, to ensure that all participants could accurately perceive the stimuli.
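The timing logic of the sparse-sampling design can be summarized as follows. This is a schematic sketch, not the E-prime script: the per-volume acquisition time is an assumed placeholder, since only the 1000-ms silent gap is stated above.

```python
# Schematic of sparse-sampling timing: stimuli land in the silent gap
# between volume acquisitions. T_ACQ_S is an assumed acquisition time.
SILENT_GAP_S = 1.0   # silent interval between acquisitions (from the text)
T_ACQ_S = 2.0        # assumed time to acquire one volume (placeholder)
N_TRIALS = 48        # trials per block

for trial in range(N_TRIALS):
    cycle_start = trial * (T_ACQ_S + SILENT_GAP_S)
    stim_onset = cycle_start + T_ACQ_S  # stimulus begins when scanning stops
    if trial < 3:                       # print only the first three trials
        print(f"trial {trial}: acquisition at {cycle_start:.1f} s, "
              f"stimulus at {stim_onset:.1f} s")
```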
Fig. 1.

Example image.
fMRI data acquisition
MRI data were collected using a Siemens 3-Tesla PRISMA system equipped with a 32-channel head coil at Tianjin Medical University General Hospital. Functional images were captured using a gradient-echo (GRE) multiband echo-planar imaging (EPI) sequence with the following specifications: TR/TE = 800/30 ms, flip angle = 56°, field of view (FOV) = 104 × 104 mm, and a slice thickness of 1.5 mm. Additionally, high-resolution structural images were obtained using a magnetization-prepared rapid gradient-echo (MP-RAGE) sequence, which included 188 slices, TR = 2000 ms, TE = 30 ms, and a flip angle of 8°. To mitigate the effects of magnetic field fluctuations, the initial 10 time points of each functional run were designated as dummy scans and discarded.
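Discarding dummy volumes is a simple operation on the 4D image; a minimal sketch with nibabel is shown below (the file name is a hypothetical placeholder).

```python
# Drop the first 10 volumes (dummy scans) from a 4D functional image.
# The file path is a hypothetical placeholder.
import nibabel as nib

img = nib.load("sub-01_task-tone_bold.nii.gz")
trimmed = img.slicer[:, :, :, 10:]   # keep volumes 10 onward
nib.save(trimmed, "sub-01_task-tone_bold_trimmed.nii.gz")
print(img.shape, "->", trimmed.shape)
```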
fMRI data preprocessing
The fMRI data were preprocessed using SPM12 (http://www.fil.ion.ucl.ac.uk/spm/). Slice-timing correction was applied with reference to the middle slice in time. Head-motion correction was then performed on the original functional images using a least-squares approach and a six-parameter (rigid-body) spatial transformation13, after which all images were realigned to the mean image in a two-pass procedure. The high-resolution T1 images were co-registered to the mean functional image (i.e., the reference image), and the co-registered T1 images were processed using a unified segmentation procedure14,15. The functional images were then transformed into Montreal Neurological Institute (MNI) space using the normalization parameters estimated during segmentation. The normalized functional images were resampled to a 3 × 3 × 3 mm voxel size and smoothed with a Gaussian kernel of 4 mm full width at half maximum (FWHM). For multivariate pattern analysis, preprocessing of the functional images thus comprised slice-timing correction, head-motion correction, co-registration, normalization, and smoothing.
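For readers who want to reproduce a comparable pipeline, the sketch below chains the main SPM12 steps through nipype’s SPM interfaces. It is a minimal sketch under stated assumptions: file paths are hypothetical, slice-timing parameters are sequence-specific and therefore omitted, and the exact options used in the study may differ.

```python
# Sketch of an SPM12 preprocessing chain (realign -> coregister ->
# segment/normalize -> smooth) via nipype. Paths are hypothetical; a
# slice-timing step would precede this with sequence-specific parameters.
from nipype.interfaces import spm

func = "sub-01_task-tone_bold_trimmed.nii"  # hypothetical functional run
anat = "sub-01_T1w.nii"                     # hypothetical T1 image

# Six-parameter rigid-body motion correction, registered to the mean image.
realign = spm.Realign(in_files=func, register_to_mean=True).run()

# Co-register the T1 to the mean functional (reference) image.
spm.Coregister(target=realign.outputs.mean_image, source=anat).run()

# Unified segmentation of the T1 drives normalization to MNI space;
# functionals are resampled to 3 mm isotropic voxels.
norm = spm.Normalize12(image_to_align=anat,
                       apply_to_files=realign.outputs.realigned_files,
                       write_voxel_sizes=[3, 3, 3]).run()

# Smooth with a 4 mm FWHM Gaussian kernel.
spm.Smooth(in_files=norm.outputs.normalized_files, fwhm=[4, 4, 4]).run()
```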
Results
Because the four Mandarin tones differ in how difficult they are to recognize, each tone was analyzed separately.
Results of DC and HC when performing the tone 1 task
As shown in Tables 2 and 3 and Figs. 2 and 3, when performing the Tone 1 task, DC showed activation in the bilateral middle occipital gyrus, right supplementary motor area, bilateral precentral gyrus, and right superior parietal lobule, whereas HC showed activation in the bilateral middle occipital gyrus and left precentral gyrus. The middle occipital gyrus is involved in visual processing, the supplementary motor area is responsible for motor planning and coordination, the precentral gyrus is related to motor control and voluntary movements, and the superior parietal lobule is involved in multi-sensory information integration.
Table 2.
Results of DC when performing the tone 1 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Middle occipital gyrus | L | 1688 | 9.89 | −30 | −88 | −12 |
| Middle occipital gyrus | R | 560 | 7.26 | 34 | −86 | −10 |
| Supplementary motor area | R | 579 | 8.93 | 14 | 8 | 70 |
| Precentral gyrus | L | 323 | 9.25 | −50 | −8 | 48 |
| Precentral gyrus | R | 619 | 9.66 | 50 | −4 | 52 |
| Superior parietal lobule | R | 353 | 8.38 | 36 | −54 | 64 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Table 3.
Results of HC when performing the tone 1 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Middle occipital gyrus | L | 277 | 9.26 | −14 | −98 | 0 |
| Middle occipital gyrus | R | 162 | 6.64 | 24 | −92 | 8 |
| Precentral gyrus | L | 39 | 5.93 | −56 | −6 | 46 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Fig. 2.
Results of DC when performing the tone 1 task.
Fig. 3.
Results of HC when performing the tone 1 task.
Compared with DC, HC showed no activation in the right supplementary motor area or the right superior parietal lobule. The middle occipital gyrus and precentral gyrus were bilaterally active in DC, whereas HC showed activation in the bilateral middle occipital gyrus but only in the left precentral gyrus.
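For transparency about how tables such as Tables 2 and 3 are typically produced, the sketch below reads peak coordinates and cluster extents off a thresholded statistical map with nilearn. The input map and both thresholds are hypothetical placeholders; the study’s actual thresholding procedure is not restated here.

```python
# Extract peak MNI coordinates and cluster extents from a statistical map,
# in the style of Tables 2 and 3. The t-map file and thresholds are
# hypothetical placeholders.
import nibabel as nib
from nilearn.reporting import get_clusters_table

stat_img = nib.load("dc_tone1_tmap.nii.gz")  # hypothetical group t-map

table = get_clusters_table(stat_img,
                           stat_threshold=5.0,    # assumed voxel threshold
                           cluster_threshold=10)  # assumed minimum extent
print(table[["X", "Y", "Z", "Peak Stat", "Cluster Size (mm3)"]])
```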
Results of DC and HC when performing the tone 2 task
As depicted in Figs. 4 and 5 and Tables 4 and 5, distinct patterns of neural activation were observed during the Tone 2 task in DC compared to HC. Specifically, DC showed bilateral activation in the middle occipital gyrus, precentral gyrus, and supplementary motor area, regions critical for processing the auditory and motor aspects of speech.
Fig. 4.
Results of DC when performing the tone 2 task.
Fig. 5.
Results of HC when performing the tone 2 task.
Table 4.
Results of DC when performing the tone 2 task16.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Middle occipital gyrus | L | 178 | 8.37 | −28 | −100 | 8 |
| Middle occipital gyrus | R | 39 | 7.04 | 32 | −94 | 12 |
| Precentral gyrus | L | 58 | 6.90 | −50 | −2 | 48 |
| Precentral gyrus | R | 13 | 5.72 | 58 | 4 | 34 |
| Supplementary motor area | L | 128 | 6.77 | −2 | −8 | 60 |
| Supplementary motor area | R | 15 | 6.00 | 10 | 2 | 74 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Table 5.
Results of HC when performing the tone 2 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Calcarine | R | 1746 | 9.55 | 38 | −74 | −22 |
| Middle occipital gyrus | L | 463 | 8.81 | −24 | −98 | −8 |
| Middle temporal gyrus | R | 64 | 6.88 | 68 | −28 | −2 |
| Inferior frontal gyrus (triangular part) | L | 157 | 7.20 | −42 | 20 | 24 |
| Precentral gyrus | L | 365 | 9.40 | −36 | 2 | 58 |
| Supplementary motor area | L | 152 | 6.92 | −4 | 20 | 66 |
| Middle frontal gyrus | R | 43 | 6.19 | 44 | 4 | 52 |
| Superior parietal lobule | L | 595 | 10.64 | −20 | −70 | 54 |
| Superior parietal lobule | R | 311 | 7.41 | 26 | −68 | 54 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
In contrast, HC demonstrated a more diverse pattern of activation, encompassing the right calcarine cortex, left middle occipital gyrus, right middle temporal gyrus, left inferior frontal gyrus (triangular part), left precentral gyrus, left supplementary motor area, right middle frontal gyrus, and bilateral superior parietal lobule (Table 5). These findings indicate a broader neural engagement in the processing of vocal tones, possibly reflecting the integration of auditory, motor, and linguistic information.
Upon comparative analysis, notable differences in neural activation patterns were observed between the two groups. DC did not exhibit activation in several key areas associated with language processing and auditory perception, including the calcarine cortex, middle temporal gyrus, triangular part of the inferior frontal gyrus, middle frontal gyrus, and superior parietal lobule. Conversely, for regions such as the middle occipital gyrus, precentral gyrus, and supplementary motor area, there was bilateral activation in DC, whereas in HC activation was predominantly observed in the left hemisphere. These discrepancies underscore the adaptive changes in neural processing strategies employed by DC in the perception and execution of vocal tones. The bilateral activation patterns in DC suggest a compensatory mechanism that may facilitate tone recognition and production in the absence of typical auditory input, providing valuable insights into the neural plasticity associated with auditory deprivation and its implications for speech and language development.
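A group contrast of this kind is commonly implemented as a second-level GLM over subject-level contrast maps. The sketch below uses nilearn for illustration; the file names and group coding are hypothetical, and with only seven subjects such a comparison is illustrative rather than well powered.

```python
# Illustrative DC-vs-HC second-level contrast on first-level maps.
# File names and the subject ordering/group coding are hypothetical.
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

contrast_maps = [f"sub-{i:02d}_tone2_con.nii.gz" for i in range(1, 8)]
group = [1, 1, 1, 1, 1, 0, 0]  # 1 = DC, 0 = HC (hypothetical order)

design = pd.DataFrame({
    "DC_vs_HC": [2 * g - 1 for g in group],  # +1 for DC, -1 for HC
    "intercept": [1] * len(group),
})

model = SecondLevelModel().fit(contrast_maps, design_matrix=design)
z_map = model.compute_contrast("DC_vs_HC", output_type="z_score")
z_map.to_filename("tone2_DC_vs_HC_zmap.nii.gz")
```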
Results of DC and HC when performing the tone 3 task
The data presented in Figs. 6 and 7 and Tables 6 and 7 elucidate the neural activation patterns observed during the Tone 3 task in DC compared to HC. This analysis reveals intricate differences in brain activation that highlight the distinct neural strategies employed by deaf children in processing complex tonal sequences. In DC, a wide array of regions was activated: the right supplementary motor area, left precentral gyrus, right inferior frontal gyrus (triangular and opercular parts), right middle frontal gyrus, bilateral superior temporal gyrus, left insula, left middle occipital gyrus, and left superior temporal pole. In HC, the activated regions comprised the bilateral middle occipital gyrus, right inferior occipital gyrus, left inferior frontal gyrus (triangular part), left medial superior frontal gyrus, left supplementary motor area, and left inferior parietal lobule. These regions are involved in motor planning and coordination, language processing, auditory processing, and visual information integration.
Fig. 6.
Results of DC when performing the tone 3 task.
Fig. 7.
Results of HC when performing the tone 3 task.
Table 6.
Results of DC when performing the tone 3 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Supplementary motor area | R | 650 | 9.29 | 0 | 12 | 58 |
| Precentral gyrus | L | 213 | 7.68 | −50 | 6 | 44 |
| Inferior frontal gyrus (triangular part) | R | 38 | 5.83 | 50 | 28 | 28 |
| Inferior frontal gyrus (opercular part) | R | 17 | 5.34 | 46 | 10 | 26 |
| Middle frontal gyrus | R | 21 | 5.58 | 36 | 52 | 22 |
| Superior temporal gyrus | L | 140 | 7.07 | −64 | −24 | 10 |
| Superior temporal gyrus | R | 43 | 6.63 | 64 | −6 | 0 |
| Insula | L | 53 | 6.48 | −32 | 28 | 0 |
| Middle occipital gyrus | L | 72 | 7.12 | −28 | −100 | 8 |
| Superior temporal pole | L | 24 | 6.35 | −54 | 10 | −6 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Table 7.
Results of HC when performing the tone 3 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Middle occipital gyrus | L | 458 | 7.29 | −24 | −102 | 2 |
| Middle occipital gyrus | R | 376 | 7.60 | 24 | −94 | 4 |
| Inferior occipital gyrus | R | 77 | 6.02 | 50 | −72 | −2 |
| Inferior frontal gyrus (triangular part) | L | 781 | 8.09 | −46 | 10 | 48 |
| Superior frontal gyrus (medial part) | L | 77 | 6.73 | −2 | 26 | 40 |
| Supplementary motor area | L | 44 | 6.50 | −8 | 16 | 68 |
| Inferior parietal lobule | L | 44 | 5.81 | −26 | −48 | 40 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Comparatively, DC showed activation in the left precentral gyrus, right inferior frontal gyrus (opercular part), right middle frontal gyrus, bilateral superior temporal gyrus, left insula, and left superior temporal pole; these regions were not activated in HC. Conversely, the right inferior occipital gyrus, left medial superior frontal gyrus, and left inferior parietal lobule showed activation in HC but not in DC. Interestingly, while DC exhibited activation only in the left middle occipital gyrus, their hearing counterparts demonstrated bilateral activation in this region. Furthermore, activation of the triangular part of the inferior frontal gyrus and the supplementary motor area was hemisphere-specific, with DC showing right-hemisphere activation in contrast to the left-hemisphere activation observed in HC.
Results of DC and HC when performing the tone 4 task
The comparative analysis presented in Figs. 8 and 9 and Tables 8 and 9 between DC and HC during the Tone 4 task reveals a marked divergence in neural activation patterns. This divergence underscores adaptive neuroplasticity in response to sensory loss and highlights distinct pathways for processing similar stimuli. For DC, neural activation was confined to the left middle occipital gyrus, left middle temporal gyrus, and right superior temporal gyrus; these regions are pivotal for visual and auditory processing, indicating a reliance on available sensory input for interpreting complex stimuli. In contrast, HC exhibited a broader spectrum of activation, encompassing the right Cerebellum_Crus1, left inferior occipital gyrus, right middle occipital gyrus, right middle temporal gyrus, right middle frontal gyrus, left inferior frontal gyrus (triangular part), and left inferior parietal lobule. The activation in these areas suggests a more integrated processing approach, drawing on both cerebellar coordination and cortical areas associated with higher-order cognitive functions.
Fig. 8.
Results of DC when performing the tone 4 task.
Fig. 9.
Results of HC when performing the tone 4 task.
Table 8.
Results of DC when performing the tone 4 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Middle occipital gyrus | L | 125 | 5.86 | −26 | −98 | −8 |
| Middle temporal gyrus | L | 34 | 6.08 | −62 | −44 | 12 |
| Superior temporal gyrus | R | 111 | 6.05 | 62 | −8 | 6 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Table 9.
Results of HC when performing the tone 4 task.
| Brain region | Hemisphere | Cluster | Peak | x (mm) | y (mm) | z (mm) |
|---|---|---|---|---|---|---|
| Cerebellum_Crus1 | R | 169 | 6.88 | 6 | −82 | −26 |
| Inferior occipital gyrus | L | 838 | 8.16 | −24 | −102 | 2 |
| Middle occipital gyrus | R | 358 | 7.98 | 26 | −96 | 2 |
| Middle temporal gyrus | R | 105 | 7.11 | 50 | −72 | −2 |
| Middle frontal gyrus | R | 56 | 6.14 | 32 | 4 | 56 |
| Inferior frontal gyrus (triangular part) | L | 1065 | 9.14 | −42 | 20 | 24 |
| Inferior parietal lobule | L | 242 | 8.36 | −30 | −46 | 54 |
The terminology ‘Brain Region’ designates the specific name of the activated neural territory, whereas ‘Hemisphere’ signifies the lateral position within the brain where this activation is localized. ‘L’ stands for the left hemisphere and ‘R’ for the right hemisphere; x, y and z are Montreal Neurological Institute (MNI) brain map coordinates.
Notably, the superior temporal gyrus was activated in DC but not in HC, which may reflect compensatory mechanisms in deaf children, possibly leveraging residual auditory processing capabilities or cross-modal plasticity to interpret auditory-related information. Conversely, activation in the occipital and middle temporal regions was lateralized to the left hemisphere in DC and to the right hemisphere in HC, indicating differential engagement of visual and auditory processing networks. Furthermore, regions such as the Cerebellum_Crus1, inferior occipital gyrus, middle frontal gyrus, triangular part of the inferior frontal gyrus, and inferior parietal lobule were activated in HC but not in DC. This absence of activation in DC could reflect neural reorganization or the prioritization of alternative processing pathways in the absence of typical auditory input.
Discussion
Our findings highlight the variations in neural activation patterns between deaf and hearing children engaged in a task designed to assess their ability to discriminate vocal tones.
Due to hearing loss, some of the auditory cortices responsible for processing vocal intonation in deaf children do not function optimally
The delineation of neural correlates of tone perception in the existing literature emphasizes the integral roles of the left inferior frontal gyrus, the right middle temporal gyrus, and the bilateral superior temporal gyrus in the auditory processing of tonal variations, which underpin the cognitive mechanisms that facilitate language comprehension and production5. However, our findings reveal a marked discrepancy in neural activation patterns between DC and HC, particularly in the Tone 2 and Tone 4 comparisons.
Specifically, in the Tone 2 comparison, DC exhibited no activation in the middle temporal gyrus, inferior frontal gyrus, middle frontal gyrus, or superior parietal gyrus, regions critically involved in pitch processing in hearing children: the middle temporal gyrus is a key node in auditory-semantic integration, the inferior frontal gyrus supports phonological working memory, and the superior parietal gyrus is related to auditory spatial attention17–19. The lack of activation in these regions is consistent with the findings of Kral et al., suggesting that the auditory cortex in congenitally deaf individuals undergoes functional decline due to prolonged deprivation of auditory input, leading to reliance on cross-modal reorganization20.
Similarly, the Tone 4 comparison revealed an absence of activation in the inferior frontal gyrus, middle frontal gyrus, inferior occipital gyrus, and inferior parietal regions in DC. These areas are implicated in a wide range of human behavioral and cognitive functions, including emotion, memory, cognitive control, and auditory perception21,22. The lack of activation in these regions suggests a deviation from typical neural functioning in DC, possibly attributable to auditory deprivation and the consequent challenges in acquiring and recognizing vocal tones through traditional auditory means.
Notably, the bilateral middle occipital gyrus was significantly activated in DC during the Tone 1 task (Table 2), suggesting that they may compensate for auditory loss with enhanced visual attention23. This finding is consistent with Nicole et al.’s conclusions about visual compensation in the deaf, further supporting the idea that DC rely more on visual information when processing tones24.
In conclusion, the neural activation patterns in DC during tone processing significantly differ from those observed in HC. This highlights the adaptive neuroplasticity in response to auditory deprivation and the reliance on visual and cross-modal reorganization strategies in deaf individuals.
When decoding vocal intonation information, deaf children might utilize alternative neural pathways or networks
In the Tone 1 comparison, the DC group exhibited activation in the bilateral middle occipital gyrus and bilateral precentral gyrus, while the HC group showed activation only in the bilateral middle occipital gyrus and left precentral gyrus. The Tone 3 and Tone 4 comparisons provide further insight into the divergent neural mechanisms employed by DC and HC in processing vocal tones.
In the Tone 3 comparison, DC exhibited activation in the left precentral gyrus, right inferior frontal gyrus (opercular part), right middle frontal gyrus, bilateral superior temporal gyrus, left insula, and left superior temporal pole, regions not activated in HC during the same task. Notably, the insula is traditionally involved in multi-sensory integration, and its activation may reflect DC’s reliance on non-auditory cues such as vibrations or lip reading25. The superior temporal gyrus, traditionally associated with auditory processing, showed activation in DC; this may indicate preservation of a “silent auditory network” in congenitally deaf individuals, with residual function in parts of the auditory cortex26. This finding contrasts with MacSweeney et al., who found that sign language users primarily activate visual-motor networks; however, the engagement of the superior temporal gyrus by DC in this study may be related to their oral language training background3.
In the Tone 4 comparison, only DC showed activation in the superior temporal gyrus, underscoring a distinct pattern of neural engagement in the absence of auditory input. This activation suggests that DC mobilize additional cognitive resources and attention to process vocal tones, compensating for their auditory loss and relying on non-auditory cues for speech perception.
In contrast, hearing children demonstrate more efficient neural networks and cognitive strategies, as evidenced by more streamlined activation patterns during vocal tone discrimination tasks27. This efficiency is further supported by the larger activation cluster sizes observed in hearing children, consistent with the hypothesis of optimized neural processing in this group.
Moreover, existing research indicating significant activation of auditory centers in response to visual stimuli in deaf individuals aligns with our findings24. The heightened activity in regions such as the superior temporal gyrus during the Tone 3 and Tone 4 recognition tasks in deaf children suggests a reliance on visual rather than auditory information for tone processing28. This necessitates the engagement of an expanded network of brain regions, highlighting the brain’s adaptability in leveraging alternative sensory inputs for language processing.
From a neurolinguistic perspective, the processing of auditory and visual information traverses distinct pathways within the brain, and this dichotomy extends to the processing of vocal tone information29. While normal hearing children convey vocal tone information through auditory pathways to the temporal and parietal regions for processing, deaf children, impeded by their auditory deficits, may depend more on visual cues30. This reliance on a different sensory modality necessitates the use of alternative neural networks, manifesting in varied brain activation patterns during the processing of vocal tone information.
Children with hearing impairment process vocal tone information with distinct hemispheric characteristics
The delineation of how tones and segments are concurrently processed in both hemispheres, with their integration primarily occurring in the left hemisphere, sets a foundational understanding of the brain’s approach to language processing9. Building upon this framework, our research delves into the hemispheric characteristics of vocal tone processing in children with hearing loss, revealing notable distinctions from their hearing counterparts. Our findings indicate that children with hearing loss engage different hemispheric activations when processing specific vocal tones, marking a departure from the patterns observed in hearing children. For instance, in the Tone 1 results, DC exhibited more significant activation than HC in the bilateral middle occipital gyrus and left precentral gyrus. During the processing of Tone 2, children with hearing loss exhibited significant bilateral activation in the middle occipital gyrus, precentral gyrus, and supplementary motor areas, a pattern not mirrored in hearing children. Conversely, Tone 3 elicited an inverse activation pattern in areas such as the middle occipital gyrus and the triangular part of the inferior frontal gyrus when compared to hearing children. Furthermore, Tone 4 processing in children with hearing loss was characterized by more pronounced activation in the left hemisphere, particularly in the middle occipital gyrus and middle temporal gyrus, whereas hearing children displayed greater activation in the right hemisphere.
These differential activation patterns underscore the nuanced nature of vocal tone processing in the brains of children with hearing impairments and suggest a functional reorganization of the brain to accommodate the limited access to auditory inputs. The adaptations observed may stem from the unique linguistic processing demands faced by children with hearing loss, necessitating a more distributed processing strategy for vocal tone information across the brain. Consequently, these children may engage alternate neural circuits and mechanisms to mitigate challenges in vocal tone recognition31.
Notably, in the Tone 4 task, children with hearing loss exhibited left hemisphere dominance (activation in left middle temporal gyrus and middle occipital gyrus, Table 8), whereas HC displayed more pronounced activation in the right hemisphere (Table 9). This difference may reflect the language-specific nature of Mandarin tones: as a tonal language, Mandarin typically involves the left hemisphere in pitch-semantic mapping32. The enhanced activation in the left hemisphere in DC may be attributed to their strategy of learning tones via visual cues (such as Chinese characters), while HC rely on the right hemisphere for non-linguistic pitch processing33. This finding partially aligns with Chien et al., though their study did not include a deaf group, underscoring the innovative nature of this research7.
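Hemispheric dominance claims of this kind are often quantified with a laterality index, LI = (L − R)/(L + R), computed over suprathreshold voxels. The sketch below illustrates one minimal way to compute it from a statistical map in MNI space, where negative x corresponds to the left hemisphere; the input file and threshold are hypothetical, and this simple voxel-count LI is only one of several possible definitions.

```python
# Minimal voxel-count laterality index from a thresholded map in MNI
# space: LI = (L - R) / (L + R). Input file and threshold are hypothetical.
import numpy as np
import nibabel as nib

img = nib.load("dc_tone4_tmap.nii.gz")  # hypothetical group t-map
data = img.get_fdata()

# World-space x coordinate of every voxel (MNI: x < 0 is left hemisphere).
i, j, k = np.indices(data.shape)
ijk1 = np.stack([i, j, k, np.ones_like(i)]).reshape(4, -1)
x_mm = (img.affine @ ijk1)[0].reshape(data.shape)

active = data > 5.0  # assumed statistical threshold
n_left = np.count_nonzero(active & (x_mm < 0))
n_right = np.count_nonzero(active & (x_mm > 0))
li = (n_left - n_right) / max(n_left + n_right, 1)
print(f"left={n_left}, right={n_right}, LI={li:+.2f}")  # LI > 0: left-dominant
```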
Conclusion
Our investigation into the neural underpinnings of vocal tone recognition among deaf children reveals that these individuals likely engage distinct neural networks for processing tone information, diverging significantly from their hearing peers. Specifically, we observe pronounced disparities in neural activation during vocal tone processing tasks within key regions, including the middle occipital gyrus, middle temporal gyrus, superior temporal gyrus, inferior frontal gyrus, and middle frontal gyrus. This contrasts with prior research by highlighting that, despite showing hemispheric patterns of tone processing similar to those of hearing children, deaf children may predominantly utilize visual cues to decipher and process vocal tones.
It is crucial to note that our study’s scope was confined to the neural mechanisms associated with the recognition of Chinese tones in deaf children. Consequently, the general applicability of our findings requires further empirical verification across a broader spectrum of tonal distinctions. This gap underscores the need for subsequent investigations not only to validate these preliminary insights but also to expand our understanding of how auditory deprivation influences the neural strategies deployed for language processing. Such endeavors will enrich the comprehension of language acquisition and cognitive adaptation in the context of sensory impairments, offering valuable implications for the design and implementation of educational and therapeutic interventions tailored to the unique needs of deaf children.
Acknowledgements
We thank all the subjects for participating in and contributing to this research. We also thank Hao Ding and Wen Qin for their contributions to this research.
Author contributions
Conceptualization, Qiang Li; methodology, Heng Zhao; software, Yuan Meng; validation, Shiyu Li and Qiang Li; formal analysis, Qiang Li; investigation, Qiang Li; resources, Qiang Li; data curation, Qiuli Li and Qiang Li; writing—original draft preparation, Yuan Meng; writing—review and editing, Yuan Meng and Qiuli Li; visualization, Qiang Li; supervision, Qiang Li; project administration, Yuan Meng; funding acquisition, Qiang Li. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by a research grant from the National Planning Office of Philosophy and Social Science, China [20BYY096].
Data availability
Due to privacy and ethical concerns, the data from this study will not be publicly available. Relevant data can be obtained from the corresponding author upon reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This study was approved by the Medical Ethics Committee of Tianjin First Central Hospital (Review No. 2018N109KY; May 28, 2018).
Informed consent
Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the guardians of all participants. Written informed consent has been obtained from the patients to publish this paper.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Patel, A. D. & Iversen, J. R. The linguistic benefits of musical abilities. Trends Cogn. Sci.10(4), 195–201 (2006). [DOI] [PubMed] [Google Scholar]
- 2.Le Bihan, D. & Karni, A. Applications of magnetic resonance imaging to the study of human brain function. Curr. Opin. Neurobiol.5(2), 231–237 (1995). [DOI] [PubMed] [Google Scholar]
- 3.MacSweeney, M., Capek, C. M., Campbell, R. & Woll, B. The signing brain: the neurobiology of sign language. Trends Cogn. Sci.12(11), 432–440 (2008). [DOI] [PubMed] [Google Scholar]
- 4.Kwok, V. P. Y., Guo, D., Yakpo, K., Matthews, S. & Li, H. T. Neural systems for auditory perception of lexical tones. J. Neurolinguistics. 37, 1–12 (2016). [Google Scholar]
- 5.Kwok, V. P. Y., Matthews, S., Yakpo, K. & Tan, L. H. Neural correlates and functional connectivity of lexical tone processing in reading. Brain Lang.194, 27–36 (2019). [DOI] [PubMed] [Google Scholar]
- 6.Kwok, V. P. Y. et al. A meta-analytic study of the neural system for auditory processing of lexical tones. NeuroImage158, 137–146 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Chien, P. J., Friederici, A. D., Hartwigsen, G. & Sammler, D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum. Brain. Mapp.41(7), 1842–1858 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Si, X. P., Zhou, W. J. & Hong, B. Cooperative cortical network for categorical processing of Chinese lexical tone. Front. Hum. Neurosci.11, 512 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Chang, H. C., Lee, H. J., Tzeng, O. J. L. & Kuo, W. J. Implicit target substitution and sequencing for lexical tone production in Chinese: an fMRI study. PLOS ONE. 9(1), e83126 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Tang, P. et al. Longer cochlear implant experience leads to better production of Mandarin tones for early implanted children. Ear Hear.42(3), 671–680 (2021). [DOI] [PubMed] [Google Scholar]
- 11.Yi, L. L., Yi, H. L., Hui, M. Y., Yeou, J. C. & Jiunn, L. W. Tone production and perception and intelligibility of produced speech in Mandarin-speaking cochlear implanted children. Int. J. Audiol.57(12), 925–933 (2018). [DOI] [PubMed] [Google Scholar]
- 12.Gang, L., Sigfrid, D. S. & Yun, Z. Tone perception in Mandarin-speaking children with cochlear implants. Int. J. Audiol.56(S2), S23–S30 (2017). [DOI] [PubMed] [Google Scholar]
- 13.Friston, K. J. et al. Spatial registration and normalization of images. Hum. Brain. Mapp.3(3), 165–189 (1995). [Google Scholar]
- 14.Studholme, C., Hill, D. L. G. & Hawkes, D. J. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn.32(1), 71–86 (1999). [Google Scholar]
- 15.Ashburner, J. & Friston, K. J. Unified segmentation. NeuroImage26(3), 839–851 (2005). [DOI] [PubMed] [Google Scholar]
- 16.Yuan, M. & Qiang, L. An fMRI study of the neural mechanisms of second and third tone recognition in deaf children. J. Cogn. Res. Artif. Intell.23(4), 45–60 (2023). [Google Scholar]
- 17.Hickok, G. & Poeppel, D. The cortical organization of speech processing. Nat. Rev. Neurosci.8(5), 393–402 (2007). [DOI] [PubMed] [Google Scholar]
- 18.Baddeley, A. Working memory and language: an overview. J. Commun. Disord.36(3), 189–208 (2003). [DOI] [PubMed] [Google Scholar]
- 19.Corbetta, M. & Shulman, G. L. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci.3(3), 201–215 (2002). [DOI] [PubMed] [Google Scholar]
- 20.Kral, A., Kronenberger, W. G., Pisoni, D. B. & O’Donoghue, G. M. Neurocognitive factors in sensory restoration of early deafness: A connectome model. Lancet Neurol.15(6), 610–621 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Adolphs, R. Neural systems for recognizing emotion. Curr. Opin. Neurobiol.12(2), 169–177 (2002). [DOI] [PubMed] [Google Scholar]
- 22.Koch, K. et al. Gender differences in the cognitive control of emotion: an fMRI study. Neuropsychologia45(12), 2744–2754 (2007). [DOI] [PubMed] [Google Scholar]
- 23.Bavelier, D. et al. Visual attention to the periphery is enhanced in congenitally deaf individuals. J. Neurosci.20(17), RC93 (2000). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Nicole, L., Elke, R. G., Armin, D. G. & Michael, F. Cross-modal plasticity in deaf subjects dependent on the extent of hearing loss. Cogn. Brain. Res.25(3), 884–890 (2005). [DOI] [PubMed] [Google Scholar]
- 25.Craig, A. D. How do you feel—now? The anterior Insula and human awareness. Nat. Rev. Neurosci.10(1), 59–70 (2009). [DOI] [PubMed] [Google Scholar]
- 26.Scott, S. K. & Johnsrude, I. S. The neuroanatomical and functional organization of speech perception. Trends Neurosci.26(2), 100–107 (2003). [DOI] [PubMed] [Google Scholar]
- 27.Ahmad, Z., Balsamo, L. M., Sachs, B. C. & Gaillard, W. D. Auditory comprehension of language in young children. Neurology60(10), 1598–1605 (2003). [DOI] [PubMed] [Google Scholar]
- 28.Kral, A. Auditory critical periods: A review from system’s perspective. Neuroscience334, 165–181 (2016). [DOI] [PubMed] [Google Scholar]
- 29.Pascual-Leone, A. & Hamilton, R. The metamodal organization of the brain. Prog. Brain Res.134, 427–445 (2001). [DOI] [PubMed] [Google Scholar]
- 30.Jäncke, L., Specht, K., Shah, J. N. & Hugdahl, K. Focused attention in a simple dichotic listening task: an fMRI experiment. Cogn. Brain. Res.16(2), 257–266 (2003). [DOI] [PubMed] [Google Scholar]
- 31.Bidelman, G. M. et al. Age-related hearing loss increases full-brain connectivity while reversing directed signaling within the dorsal-ventral pathway for speech. Brain Struct. Function. 224(8), 2661–2676 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Gandour, J. et al. A crosslinguistic PET study of tone perception. J. Cogn. Neurosci.12(1), 207–222 (2000). [DOI] [PubMed] [Google Scholar]
- 33.Zatorre, R. J., Belin, P. & Penhune, V. B. Structure and function of auditory cortex: music and speech. Trends Cogn. Sci.6(1), 37–46 (2002). [DOI] [PubMed] [Google Scholar]



















