Abstract
Our brains constantly filter incoming sounds to understand our environment. While this ability has been studied extensively in adults, how it develops across childhood remains unclear. We recorded intracranial brain activity from 54 participants aged 4 to 21 years while they watched movie clips containing simultaneous speech and music. We used deep neural networks to separate the mixed audio into isolated speech and music streams, then built encoding models to determine which stream best predicted neural responses in the auditory cortex. Although participants heard only the original mixture, with no instruction to attend to either stream, higher-order auditory regions, including the superior temporal gyrus (STG), superior temporal sulcus (STS), and middle temporal gyrus (MTG), responded preferentially to speech. This speech bias strengthened with age in the STG, suggesting that this region progressively sharpens its representation of socially relevant sound across development. These findings indicate that speech prioritization in the developing brain emerges automatically, without directed attention.
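The encoding-model comparison summarized above can be illustrated with a minimal sketch. The code below is not the authors' pipeline; it assumes the mixture has already been separated into speech and music feature matrices (for example, time-binned acoustic features of each stream) and uses cross-validated ridge regression to ask which stream better predicts a single electrode's response. All variable names, dimensions, and hyperparameters here are illustrative placeholders, and the data are simulated.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative stand-ins for the real data (assumptions, not the authors' data):
#   speech_feats, music_feats : time-binned features of the separated streams
#                               (n_timepoints x n_features)
#   neural                    : one electrode's response time course (n_timepoints,)
n_time, n_feat = 2000, 40
speech_feats = rng.standard_normal((n_time, n_feat))
music_feats = rng.standard_normal((n_time, n_feat))
# Simulate an electrode whose response is driven by the speech stream.
neural = speech_feats @ rng.standard_normal(n_feat) + rng.standard_normal(n_time)

def encoding_score(features, response, alphas=np.logspace(-2, 4, 13)):
    """Mean cross-validated R^2 of a ridge encoding model."""
    model = RidgeCV(alphas=alphas)
    return cross_val_score(model, features, response, cv=5, scoring="r2").mean()

speech_r2 = encoding_score(speech_feats, neural)
music_r2 = encoding_score(music_feats, neural)
print(f"speech R^2 = {speech_r2:.3f}, music R^2 = {music_r2:.3f}")
# An electrode would be called "speech-preferring" if the speech-stream model
# predicts its response better than the music-stream model does.
```

A real analysis of intracranial time series would additionally need to respect temporal autocorrelation (for example, by using contiguous cross-validation folds) and model response lags; the sketch omits these details for brevity.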
