This is a preprint. It has not yet been peer reviewed by a journal.

bioRxiv [Preprint]. 2026 Mar 13:2026.03.12.710296. [Version 1] doi: 10.64898/2026.03.12.710296

Human auditory cortex preferentially tracks speech over music without explicit attention

Rajvi Agravat, Maansi Desai, Alyssa M Field, Sandra Georges, Jacob Leisawitz, Gabrielle Foox, Saman Asghar, Dave Clarke, Elizabeth C Tyler-Kabara, M Omar Iqbal, Andrew J Watrous, Anne E Anderson, Howard L Weiner, Liberty S Hamilton
PMCID: PMC13060832  PMID: 41959447

Abstract

Our brains constantly filter incoming sounds to make sense of our environment. While this ability has been studied extensively in adults, how it develops across childhood remains unclear. We recorded intracranial brain activity from 54 participants aged 4-21 while they watched movie clips containing simultaneous speech and music. We used deep neural networks to separate the mixed audio into isolated speech and music streams, then built encoding models to determine which stream best predicted neural responses in the auditory cortex. Although participants heard only the original mixture, with no instruction to attend to either stream, higher-order auditory regions, including the superior temporal gyrus (STG), superior temporal sulcus (STS), and middle temporal gyrus (MTG), responded preferentially to speech. This speech bias strengthened with age in STG, suggesting that this region progressively sharpens its representation of socially relevant sound across development. These findings indicate that speech prioritization in the developing brain emerges automatically, without directed attention.
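The encoding-model comparison described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hypothetical toy example on synthetic data, in which a simulated electrode tracks a "speech" feature stream, and ridge-regression encoding models fit to each separated stream are compared by their held-out prediction correlation. All names and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-in for acoustic features of the two separated
# streams: rows are time samples, columns are features (e.g. spectrogram bands).
n_time, n_feat = 2000, 16
speech = rng.standard_normal((n_time, n_feat))
music = rng.standard_normal((n_time, n_feat))

# Simulated electrode whose activity is driven by the speech stream plus noise.
w = rng.standard_normal(n_feat)
neural = speech @ w + 0.5 * rng.standard_normal(n_time)

def encoding_score(X, y, split=1500):
    """Fit a ridge encoding model on a training split and return the
    correlation between predicted and actual activity on held-out data."""
    model = Ridge(alpha=1.0).fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    return np.corrcoef(pred, y[split:])[0, 1]

r_speech = encoding_score(speech, neural)
r_music = encoding_score(music, neural)
print(f"speech r={r_speech:.2f}, music r={r_music:.2f}")
```

In this toy setting the speech model predicts the simulated activity far better than the music model, which is the logic behind concluding that a neural site "preferentially tracks" one stream: the stream whose encoding model yields the higher held-out correlation is the better account of that site's response.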

Full Text

The full text of this preprint is available as a PDF (3.0 MB). The web version will be available soon.


Articles from bioRxiv are provided here courtesy of Cold Spring Harbor Laboratory Preprints