Abstract
Studies of speech processing investigate the relationship between temporal structure in speech stimuli and neural activity. Despite clear evidence that the brain tracks speech at low frequencies (~ 1 Hz), it is not well understood what linguistic information gives rise to this rhythm. In this study, we harness linguistic theory to draw attention to Intonation Units (IUs), a fundamental prosodic unit of human language, and characterize their temporal structure as captured in the speech envelope, an acoustic representation relevant to the neural processing of speech. IUs are defined by a specific pattern of syllable delivery, together with resets in pitch and articulatory force. Linguistic studies of spontaneous speech indicate that this prosodic segmentation paces new information in language use across diverse languages. Therefore, IUs provide a universal structural cue for the cognitive dynamics of speech production and comprehension. We study the relation between IUs and periodicities in the speech envelope, applying methods from investigations of neural synchronization. Our sample includes recordings from everyday speech contexts of over 100 speakers and six languages. We find that sequences of IUs form a consistent low-frequency rhythm and constitute a significant periodic cue within the speech envelope. Our findings allow us to predict that IUs are utilized by the neural system when tracking speech. The methods we introduce here facilitate testing this prediction in the future (i.e., with physiological data).
Subject terms: Cognitive neuroscience, Language, Human behaviour
Introduction
Speech processing is commonly investigated by measuring brain activity as it relates to the acoustic speech stimulus1–3. Such research has revealed that neural activity tracks amplitude modulations present in speech. It is generally agreed that a dominant element in the neural tracking of speech is a 5 Hz rhythmic component, which corresponds to the rate of syllables in speech4–8. The speech stimulus is also tracked at lower frequencies (< 5 Hz, e.g.2,3), but the functional role of these fluctuations is not fully understood. They are assumed to relate to the “musical” elements of speech above the word level, collectively termed prosody. However, the structure and cognitive function of prosody are rarely investigated in the neuroscience literature.
In contrast, the role of prosody in speech and cognition is extensively studied within the field of linguistics. Such research identifies prosodic segmentation cues that are common to all languages, and that characterize what are termed Intonation Units (IUs; Fig. 1a)9,10. Importantly, in addition to providing a systematic segmentation of ongoing naturalistic speech, IUs capture the pacing of information, parceling out a maximum of one new idea per IU. Thus, IUs provide a valuable construct for quantifying how ongoing speech serves cognition in individual and interpersonal contexts. The first goal of our study is to introduce this understanding of prosodic segmentation from linguistic theory to the neuroscientific community. The second goal is to put forth a temporal characterization of IUs, and hence to offer a precise, theoretically-motivated interpretation of low-frequency auditory tracking and its relevance to cognition.
When speakers talk, they produce their utterances in chunks with a specific prosodic profile, a profile which is attested in presumably all human languages regardless of their phonological and morphosyntactic structure. The prosodic profile is characterized by the intersection of rhythmic, melodic, and articulatory properties9,12–14. Rhythmically, chunks may be delimited by pauses, but more importantly, by a fast-slow dynamic of syllables (Fig. 1b): an increase in syllable-delivery rate at the beginning of a chunk, and/or lengthening of syllables at the end of a chunk15. Melodically, chunks have a continuous pitch contour, which is typically sharply reset at the onset of a new chunk. In terms of articulation, the degree of contact between articulators is strongest at the onset of a new chunk16, a dynamic that generates resets in volume at chunk onsets and decay towards their offsets. Other cues contribute to the perceptual construct of chunks albeit less systematically and frequently, such as a change in voice quality towards chunk offset9. Linguistic research has approached these prosodic chunks from different perspectives and under different names: Intonation(al) Phrases (e.g.13,17–19), intonation-groups (e.g.14), tone-groups20, and intonation units (e.g.9,12). We adopt here the term Intonation Unit to reflect that our main interest in these chunks lies in their relevance for the processing of information in naturalistic, spontaneous discourse, an aspect foregrounded by Chafe and colleagues. To this important function of IUs we turn next.
Across languages, these prosodically-defined units are also functionally comparable in that they pace the flow of information in the course of speech9,20–23. For example, when speakers develop a narrative, they do so gradually, introducing the setting, participants and the course of events in sequences of IUs, where no more than one new piece of information relative to the preceding discourse is added per IU (Box 1). This has been demonstrated both by means of qualitative discourse analysis9,21,24, and by quantifying the average number of content items per IU. Specifically, the number of content items per IU has been found to be very similar across languages, even when they have strikingly different grammatical profiles10. Another example of the common role of IUs in different languages pertains to the way speakers construct their (speech) actions25 (but cf.26). For example, when speakers coordinate a transition during a turn-taking sequence, they rely on prosodic segmentation (i.e., IUs): points of semantic/syntactic phrase closure are not a sufficient cue for predicting when a transition will take place, and IU design is found to serve a crucial role in timing the next turn-taking transition27–29.
Here we use recordings of spontaneous speech in natural settings to characterize the temporal structure of sequences of IUs in six languages. The sample includes well-studied languages from the Eurasian macro-area, as well as much lesser-known and -studied languages, spoken in the Indonesian-governed part of Papua by smaller speech communities. Importantly, our results generalize across this linguistic diversity, despite the substantial differences in socio-cultural settings and all aspects of grammar, including other prosodic characteristics. In contrast to previous research, we estimate the temporal structure of IUs using direct time measurements rather than word or syllable counts (Box 2). In addition, we quantify the temporal structure of IUs in relation to the speech envelope, which is an acoustic representation relevant to neural processing of speech. We find that sequences of IUs form a consistent low-frequency rhythm at ~ 1 Hz in the six sample languages, and relate this finding to recent neuroscientific accounts of the roles of slow rhythms in speech processing.
Box 1: Information is temporally structured in social interaction.
Time is crucial for organizing information in interaction, not only via prosodic cues but also through body conduct, such as gaze direction, head and hand gestures, leg movements and body torques30. This is especially evident in task-oriented interaction, for example, in direction-giving sequences during navigation, or instruction sequences more broadly, where the many deictic words (e.g., this, there) can only be interpreted correctly when accompanied by a timely gesture. The following is such a fragment from a judo-instruction class11. The prosody-based segmentation into IUs is represented by line breaks, such that each line corresponds to one IU. To facilitate reading, transcription conventions were simplified (the original transcription can be retrieved with the sound file from the linked corpus). Speaker overlap is marked by square brackets ( [ ] ), minimally audible to medium pauses (up to 0.6 s) are marked by sequences of dots ( … ), and punctuation marks represent different functional classes of pitch movements, indicating roughly the degree of continuity between one unit and the next (comma—continuing; period—final; double dash—cut short).
Box 2: What is a word?
The notion of a word has been argued to be untenable for both language-specific and cross-linguistic analyses (e.g.,31). We demonstrate why this is so for cross-linguistic comparison with the following example in Seneca, a member of the Northern Iroquoian branch of the Iroquoian language family spoken in Northeast America32.
The first line includes a word in the language, that is, a unit of meaning whose unit-ness is defined by morphosyntactic processes in the language. The second line includes a breakdown into meaning components (separated by hyphens and tabs), obtained through linguistic analysis and comparative evidence from related languages. Due to extensive sound changes over the years, Seneca shows a high degree of fusion between meaning components; that is, the boundaries between them are obscured and not necessarily available to speakers. Note also that these meaning components cannot normally appear as independent words, that is, without the neighboring meaning components. The third line includes a gloss per meaning component, differentiating between those with grammatical meaning (part of a grammatical paradigm; in small caps) and content items. The fourth line includes the corresponding English translation.
As evident from this example, a noteworthy distinction between Seneca and some better-known languages of the world such as English is that Seneca regularly packages an event, its participants and other meaning components within a single morphosyntactic word. Consequently, for one and the same message, Seneca IUs would contain fewer words compared to English IUs.
Many other linguistic constructs vary greatly from language to language. In fact, it seems that linguistic diversity is the rule rather than the exception, and that care should be taken to avoid a priori taxonomies that would fail when considering the next language33,34.
Materials and methods
Data
We studied the temporal structure of IUs using six corpora which included spontaneously produced conversations and unscripted narratives. The corpora were all transcribed and segmented into IUs according to the unified criteria devised by Chafe9 and colleagues (Du Bois et al.12 is a practical tutorial on this discourse transcription method). This segmentation process involves close listening for the rhythmic and melodic cues presented in the introduction, as well as performing manually-adjusted acoustic analyses, the latter particularly for the extraction of pitch contours (f0), which are used to support perceived resets in pitch. Three of the corpora were segmented by specialist teams working on their native language: the Santa Barbara Corpus of Spoken American English11, the Haifa Corpus of Spoken Hebrew35, and the Russian Multichannel Discourse corpus36. The other three corpora were segmented by a single research team whose members had varying degrees of familiarity with the languages, as part of a project studying the human ability to identify IUs in unfamiliar languages: the DoBeS Summits-PAGE Collection of Papuan Malay37, the DoBeS Wooi Documentation38, and the DoBeS Yali Documentation39. The segmentation of each recording is typically performed by multiple team members and verified by a senior member experienced in auditory analyses in the language. Such was the process in the corpora above, ensuring that ambiguous cases were resolved as consistently as possible for human annotators and that the transcriptions validly represent the language at hand. Further information regarding the sample is found in Table 1 and Table S1. Supplementary Appendix 1 in the Supplementary Information elaborates on the construction of the sample and the coding and processing of IUs. From all language samples, we extracted IU onset times, noting which speaker produced a given IU.
Table 1.
| Source of recordings and transcriptions | Number of recordings | Audio duration (min:s) | Number of IUs | Number of speakers with > 5 IUs | IUs following inter-IU interval < 1 s (%) |
|---|---|---|---|---|---|
| Santa Barbara Corpus of Spoken American English | 10 | 9:58 | 460 | 19 | 78.3 |
| Haifa Corpus of Spoken Hebrew | 10 | 6:30 | 507 | 24 | 80.9 |
| Russian Multichannel Discourse | 3 | 60:26 | 3078 | 9 | 77.2 |
| DoBeS Summits-PAGE Collection of Papuan Malay | 20 | 64:07 | 2995 | 33 | 89.1 |
| DoBeS Wooi Documentation | 12 | 34:59 | 1033 | 18 | 64.6 |
| DoBeS Yali Documentation | 5 | 13:25 | 561 | 10 | 80.2 |
Phase-consistency analysis
We analyzed the relation between IU onsets and the speech envelope using a point-field synchronization measure, adopted from the study of rhythmic synchronization of neural spiking activity and Local Field Potentials40. In this analysis, the rhythmicity of IU sequences is measured through the phase consistency of IU onsets with respect to the periodic components of the speech envelope (Fig. 1c). The speech envelope is a representation of speech that captures amplitude fluctuations in the acoustic speech signal. The envelope is most commonly understood to reflect the succession of syllables at ~ 5 Hz, and indeed, it includes strong 2–7 Hz modulations4,6–8. The vocal nuclei of syllables are the main source of envelope peaks, while syllable boundaries are the main source of envelope troughs (Fig. 1b). IU onsets can be expected to coincide with troughs in the envelope, since each IU onset is necessarily also a syllable boundary. Therefore, one can expect a high phase consistency between IU onsets and the frequency component of the speech envelope corresponding to the rhythm of syllables, at ~ 5 Hz.
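For reference, the consistency metric used here (the pairwise-phase consistency, PPC40, described below) has a simple closed form. Given the phases $\theta_1, \ldots, \theta_N$ of a given envelope frequency component estimated at $N$ IU onsets, the PPC is the mean cosine of all pairwise phase differences:

$$\mathrm{PPC} = \frac{2}{N(N-1)} \sum_{j=1}^{N-1} \sum_{k=j+1}^{N} \cos\left(\theta_j - \theta_k\right)$$

Its expected value is 0 when the phases are independent and uniformly distributed, and it equals 1 when all onsets fall at exactly the same phase of that component.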
A less trivial finding would be a high phase consistency between IU onsets and other periodic components in the speech envelope. Specifically, since IUs typically include more than one syllable, such an effect would pertain to frequency components below ~ 5 Hz. In this analysis we hypothesized that the syllable organization within IUs gives rise to slow periodic components in the speech envelope. If low-frequency components were negligible in the speech envelope, estimating their phase at the times of IU onsets would yield random phase angles, that is, a uniform phase distribution translating to low phase consistency. Alternatively, if the speech envelope captured slow rhythmicity in language other than that arising from IUs, different IUs would occur at different phases of the lower frequency components, again translating to low phase consistency. In contrast to these scenarios, finding phase consistency at a degree higher than expected under the null hypothesis would both indicate that the speech envelope captures the rhythmic characteristics of IUs and characterize the period of this rhythmicity.
We computed the speech envelope for each sound file following standard procedure: in general terms, speech segments were band-pass filtered into 10 bands with cut-off points designed to be equidistant on the human cochlear map. Amplitude envelopes for each band were computed as the absolute values of the Hilbert transform. These narrowband envelopes were averaged, yielding the wideband envelope3,8 (see Supplementary Appendix 1 for further details). We extracted 2-s windows of the speech envelope centered on each IU onset and, after demeaning, decomposed them using a Fast Fourier Transform (FFT) with a single Hann window and no padding. This yielded phase estimates for frequency components at a resolution of 0.5 Hz. We then measured the consistency in phase of each FFT frequency component across speech segments using the pairwise-phase consistency metric (PPC)40, yielding a consistency spectrum. We calculated consistency spectra separately for each speaker who produced > 5 IUs and averaged the spectra within each language. Note that the PPC measure is unbiased by the number of 2-s envelope windows entering the analysis40, and likewise that in a turn-taking sequence, it is inevitable that some of the 2-s envelope windows capture speech by more than one participant. We also conducted the analysis using 4-s windows of the speech envelope, allowing for a 0.25 Hz resolution but at the expense of less data entering the analysis. Further information regarding this additional analysis can be found in part 1 of Supplementary Appendix 2 in the Supplementary Information.
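For concreteness, the following Python sketch illustrates this pipeline for a single speaker. The band edges, filter order, the 100 Hz envelope sampling rate, the file name, and all helper names are our illustrative assumptions, not the authors' published code (the original analyses are available at the OSF repository cited under Data availability).

```python
# A minimal sketch of the envelope and phase-consistency pipeline.
# Assumes a mono recording sampled at >= 16 kHz and IU onset times in seconds.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, hilbert, resample_poly

def erb_space(f_lo, f_hi, n):
    """n points equidistant on the ERB-rate (cochlear) scale between f_lo and f_hi."""
    erb = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    erb_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 4.37e-3
    return erb_inv(np.linspace(erb(f_lo), erb(f_hi), n))

def wideband_envelope(x, fs, n_bands=10, f_lo=100.0, f_hi=8000.0, fs_env=100):
    """Mean of 10 narrowband Hilbert envelopes, downsampled to fs_env Hz."""
    edges = erb_space(f_lo, f_hi, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(np.abs(hilbert(sosfiltfilt(sos, x))))
    return resample_poly(np.mean(bands, axis=0), fs_env, fs)

def onset_phases(env, fs_env, onsets_s, win_s=2.0):
    """FFT phases of demeaned, Hann-tapered envelope windows centered on onsets."""
    half = int(win_s * fs_env / 2)
    taper = np.hanning(2 * half)
    phases = []
    for t in onsets_s:
        c = int(round(t * fs_env))
        if c - half < 0 or c + half > len(env):
            continue  # skip onsets too close to the recording edges
        seg = env[c - half:c + half]
        phases.append(np.angle(np.fft.rfft((seg - seg.mean()) * taper)))
    freqs = np.fft.rfftfreq(2 * half, d=1.0 / fs_env)  # 0.5 Hz bins for 2-s windows
    return np.asarray(phases), freqs

def ppc(phases):
    """Pairwise phase consistency (ref. 40) per frequency bin, via the identity
    |sum_j exp(i*theta_j)|^2 = N + 2 * sum_{j<k} cos(theta_j - theta_k)."""
    n = phases.shape[0]
    s = np.abs(np.exp(1j * phases).sum(axis=0)) ** 2 - n
    return s / (n * (n - 1))

# Hypothetical usage; "recording.wav" and iu_onsets_s (one speaker's IU onset
# times, in seconds) stand in for real corpus data.
fs, x = wavfile.read("recording.wav")
env = wideband_envelope(x.astype(float), fs)
phases, freqs = onset_phases(env, 100, iu_onsets_s)
consistency_spectrum = ppc(phases)
```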
Statistical assessment
We assessed the statistical significance of peaks in the average consistency spectra using a randomization procedure (Fig. 1d). Per language, we created a randomization distribution of consistency estimates with 1000 sets of average surrogate spectra. These surrogate spectra were calculated from the speech envelope as before, but with temporally permuted IU onsets that maintained the association with envelope troughs. Troughs were defined by a minimum magnitude of 0.01 (on a scale of 0–1) and a minimal duration of 200 ms between consecutive troughs, as would be expected from syllables, on average. By constraining the temporal permutation of IU onsets, we address the fact that each IU onset is necessarily a syllable onset and is therefore expected to align with a trough in the envelope. We then calculated, for each frequency, the proportion of consistency estimates (in the 1000 surrogate spectra) that were greater than the consistency estimate obtained for the observed IU sequences. We corrected p-values for multiple comparisons across frequency bins, ensuring that, on average, the False Discovery Rate (FDR) would not exceed 1%41,42.
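A matching sketch of the randomization test is given below, reusing the helpers from the previous block and again simplifying to a single speaker (omitting the within-language averaging). The trough detector is our interpretation of the stated criteria (depth ≥ 0.01 on the 0–1 envelope scale, ≥ 200 ms apart), and the correction step applies the standard step-up FDR rule41,42.

```python
# A sketch of the trough-constrained permutation test and FDR thresholding.
import numpy as np
from scipy.signal import find_peaks

def envelope_troughs(env, fs_env, min_depth=0.01, min_sep_s=0.2):
    """Times (s) of envelope local minima: candidate syllable boundaries."""
    idx, _ = find_peaks(-env, prominence=min_depth,
                        distance=int(min_sep_s * fs_env))
    return idx / fs_env

def surrogate_spectra(env, fs_env, n_onsets, n_perm=1000, seed=0):
    """PPC spectra for IU onsets permuted onto randomly drawn envelope troughs."""
    rng = np.random.default_rng(seed)
    troughs = envelope_troughs(env, fs_env)
    spectra = []
    for _ in range(n_perm):
        fake_onsets = rng.choice(troughs, size=n_onsets, replace=False)
        phases, freqs = onset_phases(env, fs_env, fake_onsets)
        spectra.append(ppc(phases))
    return np.asarray(spectra), freqs

def fdr_significant(observed, surrogates, q=0.01):
    """Permutation p-value per frequency bin, then step-up FDR thresholding
    so that the expected proportion of false discoveries does not exceed q."""
    p = (1 + np.sum(surrogates >= observed, axis=0)) / (1 + surrogates.shape[0])
    order = np.argsort(p)
    m = p.size
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True
    return p, significant

# Hypothetical usage, continuing the example above:
observed = ppc(phases)
surrogates, _ = surrogate_spectra(env, 100, n_onsets=phases.shape[0])
p_values, significant = fdr_significant(observed, surrogates)
```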
Results
We studied the temporal structure of IU sequences through their alignment with the periodic components of the speech envelope, using a phase-consistency analysis. We hypothesized that one of the characteristics of IUs, the fast-slow dynamic of syllables, would give rise to slow periodic modulations in the speech envelope. Since the IUs analyzed here are informed by a cognitively-oriented linguistic theory and have wide cross-linguistic validity, we further hypothesized that comparable periodicity would be found in different languages. Figure 2 displays the observed phase-consistency spectra in the six sample languages. IU onsets appear at significantly consistent phases of the low-frequency components of the speech envelope, indicating that their rhythm is captured in the speech envelope, hierarchically above the syllabic rhythm at ~ 5 Hz (English: 0.5–1.5 Hz; Hebrew: 1–1.5, 2.5–3 Hz; Russian: 0.5–3 Hz; Papuan Malay: 0.5–3.5 Hz; Wooi: 0.5–3.5 Hz; and Yali: 0.5–4 Hz; all p’s < 0.001). Of note, the highest phase consistency is measured at 1 Hz in all languages except Hebrew, in which the peak is at the neighboring frequency bin, 1.5 Hz.
To complement the results of the phase-consistency analysis, we estimated the median duration of IUs (Fig. 2, insets). The bootstrapped 95% confidence intervals of this estimate are mostly overlapping, to a resolution of 0.1 s, for all languages but Hebrew. For Hebrew, the median estimate indicates a shorter IU duration, which may underlie a faster rhythm of IU sequences. Note, however, that duration is only a proxy for the rhythmicity of IU sequences, as IUs do not always succeed each other without pause (Fig. S2, insets). The consistent trends across the two analyses are reassuring, but we do not pursue the post-hoc hypothesis that Hebrew deviates from the other languages; in the planned phase-consistency analysis, the range of significant frequency components is consistent across languages.
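For reference, a minimal percentile-bootstrap sketch of this interval estimate follows; the resample count and seed are our illustrative assumptions, and ref. 43 surveys more refined bootstrap intervals.

```python
# Percentile-bootstrap 95% CI for the median IU duration (a sketch).
import numpy as np

def median_ci(durations_s, n_boot=10_000, alpha=0.05, seed=0):
    """Median IU duration (s) with a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    d = np.asarray(durations_s, dtype=float)
    meds = np.median(rng.choice(d, size=(n_boot, d.size), replace=True), axis=1)
    lo, hi = np.percentile(meds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.median(d)), (lo, hi)
```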
We sought to confirm that this effect was not the result of an amplitude transient at the beginning of IU sequences, or the product of pauses in the recording, both of which affect the stationarity of the signal and may bias its spectral characterization. To this end, we repeated the analysis, submitting only IUs that followed an inter-IU interval below 1 s, that is, between approximately 65% and 89% of the data, depending on the language (Table 1). The consistency estimates at 1 Hz were still larger than expected under the null hypothesis that IUs lack a definite rhythmic structure (Fig. S2).
Our results are consistent with preliminary characterizations of the temporal structure of IUs17,21,44. The direct time measurements we used obviate the pitfalls of length measurements based on word or syllable counts (e.g.9,10,17,23). The temporal structure of IUs cannot be inferred from reported word counts, because what constitutes a word varies greatly across languages (Box 2). Syllable count per IU may provide an indirect estimate of IU length, especially if variation in syllable duration is taken into account (e.g.45), but it does not capture information about the temporal structure of IU sequences.
Discussion
Neuroscientific studies suggest that neural oscillations participate in segmenting the auditory signal and encoding linguistic units during speech perception. Many studies focus on the role of oscillations in the theta range (~ 5 Hz) and in the gamma range (> 40 Hz); the levels of segmentation attributed to these ranges are the syllable level and the fine-grained encoding of phonetic detail, respectively1. Studies also identify slower oscillations, in the delta range (~ 1 Hz), and have attributed them to segmentation at the level of phrases, both prosodic and formal, such as semantic/syntactic phrases2,3,46–51. Previous studies have consistently demonstrated a decrease in high-frequency neural activity at points of semantic/syntactic completion52 or natural pauses between phrases53. This pattern of activity yields a slow modulation aligned to phrase structure. We harness linguistic theory to offer a conceptual framework for such slow modulations. We quantify the phase consistency between slow acoustic modulations and IU onsets, and demonstrate for the first time that prosodic units with established functions in cognition give rise to a low-frequency rhythm in the auditory signal available to listeners.
A previous study identified low-frequency modulations in speech acoustics. Analyzing a representation of speech acoustics similar to the speech envelope used here, Tilsen and Arvaniti54 provided a linguistically-informed interpretation according to which these low-frequency modulations are associated with stress elements of speech. There are two important differences between the present study and Tilsen and Arvaniti54. First, the current work directly interrogates the temporal structure of a linguistically labeled construct—the IU. Second, prosodic prominence structure and IUs contrast in an important respect: whereas prominence structure differs between languages55, IUs are conceptually universal10. In our characterization of the temporal structure of IUs, we contribute empirical evidence to this conceptual notion of universality; the temporal structure found in the speech envelope around labeled IUs is common to languages with dramatically different prosodic systems (for example, Papuan Malay and Yali in the current work). Finally, unlike IUs, prominence is marked by different phonetic cues in different languages, so any acoustic analysis focusing on the acoustic correlates of prominence is bound to find cross-linguistic differences.
Previous research in cognitive neuroscience has proposed to dissociate delta activity that represents acoustically-driven segmentation following prosodic phrases from delta activity that represents knowledge-based segmentation of semantic/syntactic phrases56. From the perspective of studying the temporal structure of spontaneous speech, we suggest that the distinction maintained between semantic/syntactic and prosodic phrasing might be superficial. That is because the semantic/syntactic building blocks always appear within prosodic phrases in natural language use57–60. Studies investigating semantic/syntactic building blocks often compare the temporal dynamics of intact grammatical structure to word lists or grammatical structure in an unfamiliar language (e.g.46,49,52). We argue that such studies need to incorporate the possibility that ongoing processing dynamics might reflect perceptual chunking, owing to the ubiquity of prosodic segmentation cues in natural language experience. This possibility is further supported by the fact that theoretically-defined semantic/syntactic boundaries are known to enhance the perception of prosodic boundaries, even when those are artificially removed from the speech segment. In a study that investigated the role of syntactic structure in guiding the perception of prosody in naturalistic speech61, syntactic structure was found to make an independent contribution to the perception of prosodic grouping. Another study equated prosodic boundary strength experimentally (controlling in a parametric fashion word duration, pitch contour, and following-pause duration), and found the same result: semantic/syntactic completion contributed to boundary perception62. Even studies that use visual serial word presentation paradigms rather than auditory stimuli are not immune to an interpretation of prosodically-guided perceptual chunking, which is known to affect silent reading63 (for a review see64).
Independently of whether delta activity in the brain of the listener represents acoustic landmarks, abstract knowledge, or the prosodically-mediated embodiment of abstract knowledge58, our results point to another putative role for slow rhythmic brain activity. We find that, regardless of grammatical system, speaker, and speech mode, speakers express their developing ideas at a rate of approximately 1 Hz. Previous studies have shown that in the brains of listeners, a wide network interacts with low-frequency auditory-tracking activity, suggesting an interface of prediction- and attention-related processes, memory and the language system2,50,65–67. We expect that via such low-frequency interactions, this same network constrains spontaneous speech production, orchestrating the management and communication of conceptual foci9.
Finally, our findings render plausible several hypotheses within the field of linguistics. At a basic level, the consistent duration of IUs may provide a temporal upper bound to the construal of other linguistic units (e.g., morphosyntactic words).
Acknowledgements
MI is supported by the Humanities Fund PhD program in Linguistics and the Jack, Joseph and Morton Mandel School for Advanced Studies in the Humanities. Our work would not have been possible without the substantial efforts carried out by the creators of the corpora, their teams, and the people they recorded all over the world.
Author contributions
This work is an outcome of author M.I.’s MA thesis under the joint supervision of authors A.N.L. and E.G. The authors were jointly active in all stages of this research and its publication.
Data availability
The custom-written code producing the analyses and figures is available online, in an Open Science Framework repository: https://osf.io/eh3y8/?view_only=6bc102a233914a4db54001345bee944c. As for data, IU time stamps for Hebrew and English can be retrieved from the authors upon request. IU time stamps for the rest of the languages, as well as all audio files, can be retrieved from the cited corpora.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary information is available for this paper at https://doi.org/10.1038/s41598-020-72739-4.
References
- 1. Giraud A-L, Poeppel D. Cortical oscillations and speech processing: Emerging computational principles and operations. Nat. Neurosci. 2012;15:511–517. doi: 10.1038/nn.3063.
- 2. Park H, Ince RAA, Schyns PG, Thut G, Gross J. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners. Curr. Biol. 2015;25:1649–1653. doi: 10.1016/j.cub.2015.04.049.
- 3. Gross J, et al. Speech rhythms and multiplexed oscillatory sensory coding in the human brain. PLoS Biol. 2013;11:e1001752. doi: 10.1371/journal.pbio.1001752.
- 4. Ding N, et al. Temporal modulations in speech and music. Neurosci. Biobehav. Rev. 2017. doi: 10.1016/j.neubiorev.2017.02.011.
- 5. Räsänen O, Doyle G, Frank MC. Pre-linguistic segmentation of speech into syllable-like units. Cognition. 2018;171:130–150. doi: 10.1016/j.cognition.2017.11.003.
- 6. Varnet L, Ortiz-Barajas MC, Erra RG, Gervain J, Lorenzi C. A cross-linguistic study of speech modulation spectra. J. Acoust. Soc. Am. 2017;142:1976–1989. doi: 10.1121/1.5006179.
- 7. Greenberg S, Carvey H, Hitchcock L, Chang S. Temporal properties of spontaneous speech—A syllable-centric perspective. J. Phon. 2003;31:465–485.
- 8. Chandrasekaran C, Trubanova A, Stillittano S, Caplier A, Ghazanfar AA. The natural statistics of audiovisual speech. PLoS Comput. Biol. 2009;5:e1000436. doi: 10.1371/journal.pcbi.1000436.
- 9. Chafe W. Discourse, Consciousness and Time: The Flow and Displacement of Conscious Experience in Speaking and Writing. Chicago: University of Chicago Press; 1994.
- 10. Himmelmann NP, Sandler M, Strunk J, Unterladstetter V. On the universality of intonational phrases: A cross-linguistic interrater study. Phonology. 2018;35:207–245.
- 11. Du Bois JW, et al. Santa Barbara Corpus of Spoken American English, Parts 1–4. https://www.linguistics.ucsb.edu/research/santa-barbara-corpus (2005).
- 12. Du Bois JW, Cumming S, Schuetze-Coburn S, Paolino D. Discourse Transcription. Santa Barbara Papers in Linguistics. Santa Barbara: University of California; 1992.
- 13. Shattuck-Hufnagel S, Turk AE. A prosody tutorial for investigators of auditory sentence processing. J. Psycholinguist. Res. 1996;25:193–246. doi: 10.1007/BF01708572.
- 14. Cruttenden A. Intonation. Cambridge: Cambridge University Press; 1997.
- 15. Seifart F, et al. The extent and degree of utterance-final word lengthening in spontaneous speech from ten languages. Linguist. Vanguard. 2020.
- 16. Keating P, Cho T, Fougeron C, Hsu C-S. Domain-initial articulatory strengthening in four languages. In: Local J, Ogden R, Temple R, editors. Phonetic Interpretation: Papers in Laboratory Phonology VI. Cambridge: Cambridge University Press; 2003. pp. 145–163.
- 17. Jun S-A. Prosodic typology. In: Jun S-A, editor. Prosodic Typology: The Phonology of Intonation and Phrasing. Oxford: Oxford University Press; 2005. pp. 430–458.
- 18. Ladd DR. Intonational Phonology. Cambridge: Cambridge University Press; 2008.
- 19. Selting M, et al. A system for transcribing talk-in-interaction: GAT 2. Translated and adapted for English by Elizabeth Couper-Kuhlen and Dagmar Barth-Weingarten. Gesprächsforschung—Online-Zeitschrift zur verbalen Interaktion. 2011;12:1–51. https://www.gespraechsforschung-ozs.de/heft2011/px-gat2-englisch.pdf.
- 20. Halliday MAK. Intonation and Grammar in British English. Berlin: De Gruyter; 1967.
- 21. Chafe W. Cognitive constraints on information flow. In: Tomlin RS, editor. Coherence and Grounding in Discourse. Amsterdam: John Benjamins Publishing Company; 1987. pp. 21–51.
- 22. Du Bois JW. The discourse basis of ergativity. Language. 1987;63:805–855.
- 23. Pawley A, Syder FH. The one-clause-at-a-time hypothesis. In: Riggenbach H, editor. Perspectives on Fluency. Ann Arbor: University of Michigan Press; 2000. pp. 163–199.
- 24. Ono T, Thompson SA. What can conversation tell us about syntax? In: Davis PW, editor. Alternative Linguistics: Descriptive and Theoretical Modes. Amsterdam: John Benjamins Publishing Company; 1995. pp. 213–272.
- 25. Selting M. Prosody in interaction: State of the art. In: Barth-Weingarten D, Reber E, Selting M, editors. Prosody in Interaction. Amsterdam: John Benjamins Publishing Company; 2010. pp. 3–40.
- 26. Szczepek-Reed B. Intonation phrases in natural conversation: A participants’ category? In: Barth-Weingarten D, Reber E, Selting M, editors. Prosody in Interaction. Amsterdam: John Benjamins Publishing Company; 2010. pp. 191–212.
- 27. Bögels S, Torreira F. Listeners use intonational phrase boundaries to project turn ends in spoken interaction. J. Phon. 2015;52:46–57.
- 28. Ford CE, Thompson SA. Interactional units in conversation: Syntactic, intonational, and pragmatic resources for the management of turns. In: Ochs E, Schegloff EA, Thompson SA, editors. Interaction and Grammar. Cambridge: Cambridge University Press; 1996. pp. 134–184.
- 29. Gravano A, Hirschberg J. Turn-taking cues in task-oriented dialogue. Comput. Speech Lang. 2011;25:601–634.
- 30. Mondada L. Multiple temporalities of language and body in interaction: Challenges for transcribing multimodality. Res. Lang. Soc. Interact. 2018;51:85–106.
- 31. Haspelmath M. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguist. 2011;45:31–80.
- 32. Chafe W. A Grammar of the Seneca Language. Berkeley: University of California Press; 2015.
- 33. Haspelmath M. Pre-established categories don’t exist: Consequences for language description and typology. Linguist. Typol. 2007;11:119–132.
- 34. Evans N, Levinson SC. The myth of language universals: Language diversity and its importance for cognitive science. Behav. Brain Sci. 2009;32:429–448. doi: 10.1017/S0140525X0999094X.
- 35. Maschler Y, et al. The Haifa Corpus of Spoken Hebrew. https://weblx2.haifa.ac.il/~corpus/corpus_website/ (2017).
- 36. Kibrik AA, et al. Russian Multichannel Discourse. https://multidiscourse.ru/main/?en=1 (2018).
- 37. Himmelmann NP, Riesberg S. The DoBeS Summits-PAGE Collection of Papuan Malay 2012–2016. https://hdl.handle.net/1839/00-0000-0000-0019-FF78-5 (2016).
- 38. Kirihio JK, et al. The DoBeS Wooi Documentation 2009–2015. https://hdl.handle.net/1839/00-0000-0000-0014-C76C-1 (2015).
- 39. Riesberg S, Walianggen K, Zöllner S. The DoBeS Yali Documentation 2012–2016. https://hdl.handle.net/1839/00-0000-0000-0017-EA2D-D (2016).
- 40. Vinck M, van Wingerden M, Womelsdorf T, Fries P, Pennartz CMA. The pairwise phase consistency: A bias-free measure of rhythmic neuronal synchronization. Neuroimage. 2010;51:112–122. doi: 10.1016/j.neuroimage.2010.01.073.
- 41. Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. Neuroimage. 2002;15:870–878. doi: 10.1006/nimg.2001.1037.
- 42. Yekutieli D, Benjamini Y. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 2001;29:1165–1188.
- 43. DiCiccio TJ, Efron B. Bootstrap confidence intervals. Stat. Sci. 1996;11:189–228.
- 44. Chafe W. Thought-Based Linguistics. Cambridge: Cambridge University Press; 2018.
- 45. Silber-Varod V, Levy T. Intonation unit size in spontaneous Hebrew: Gender and channel differences. In: Proceedings of the 7th International Conference on Speech Prosody. 2014. pp. 658–662.
- 46. Ding N, Melloni L, Zhang H, Tian X, Poeppel D. Cortical tracking of hierarchical linguistic structures in connected speech. Nat. Neurosci. 2016;19:158–164. doi: 10.1038/nn.4186.
- 47. Meyer L, Henry MJ, Gaston P, Schmuck N, Friederici AD. Linguistic bias modulates interpretation of speech via neural delta-band oscillations. Cereb. Cortex. 2016;27:4293–4302. doi: 10.1093/cercor/bhw228.
- 48. Bourguignon M, et al. The pace of prosodic phrasing couples the listener’s cortex to the reader’s voice. Hum. Brain Mapp. 2013;34:314–326. doi: 10.1002/hbm.21442.
- 49. Bonhage CE, Meyer L, Gruber T, Friederici AD, Mueller JL. Oscillatory EEG dynamics underlying automatic chunking during sentence processing. Neuroimage. 2017;152:647–657. doi: 10.1016/j.neuroimage.2017.03.018.
- 50. Keitel A, Ince RAA, Gross J, Kayser C. Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks. Neuroimage. 2017;147:32–42. doi: 10.1016/j.neuroimage.2016.11.062.
- 51. Teng X, et al. Constrained structure of ancient Chinese poetry facilitates speech content grouping. Curr. Biol. 2020;30:1299–1305. doi: 10.1016/j.cub.2020.01.059.
- 52. Nelson MJ, et al. Neurophysiological dynamics of phrase-structure building during sentence processing. Proc. Natl. Acad. Sci. 2017;114:E3669–E3678. doi: 10.1073/pnas.1701590114.
- 53. Hamilton LS, Edwards E, Chang EF. A spatial map of onset and sustained responses to speech in the human superior temporal gyrus. Curr. Biol. 2018;28:1860–1871. doi: 10.1016/j.cub.2018.04.033.
- 54. Tilsen S, Arvaniti A. Speech rhythm analysis with decomposition of the amplitude envelope: Characterizing rhythmic patterns within and across languages. J. Acoust. Soc. Am. 2013;134:628–639. doi: 10.1121/1.4807565.
- 55. Hayes B. Diagnosing stress patterns. In: Hayes B, editor. Metrical Stress Theory: Principles and Case Studies. Chicago: University of Chicago Press; 1995. pp. 5–23.
- 56. Meyer L. The neural oscillations of speech processing and language comprehension: State of the art and emerging mechanisms. Eur. J. Neurosci. 2017;48:1–13. doi: 10.1111/ejn.13748.
- 57. Hopper PJ. Emergent grammar. In: Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society, vol. 13. 1987. pp. 139–157.
- 58. Kreiner H, Eviatar Z. The missing link in the embodiment of syntax: Prosody. Brain Lang. 2014;137:91–102. doi: 10.1016/j.bandl.2014.08.004.
- 59. Mithun M. Re(e)volving complexity: Adding intonation. In: Givón T, Shibatani M, editors. Syntactic Complexity: Diachrony, Acquisition, Neuro-cognition, Evolution. Amsterdam: John Benjamins Publishing Company; 2009. pp. 53–80.
- 60. Auer P, Couper-Kuhlen E, Müller F. The study of rhythm: Retemporalizing the detemporalized object of linguistic research. In: Language in Time: The Rhythm and Tempo of Spoken Interaction. Oxford: Oxford University Press; 1999. pp. 3–34.
- 61. Cole J, Mo Y, Baek S. The role of syntactic structure in guiding prosody perception with ordinary listeners and everyday speech. Lang. Cogn. Process. 2010;25:1141–1177.
- 62. Buxó-Lugo A, Watson DG. Evidence for the influence of syntax on prosodic parsing. J. Mem. Lang. 2016;90:1–13. doi: 10.1016/j.jml.2016.03.001.
- 63. Fodor JD. Learning to parse? J. Psycholinguist. Res. 1998;27:285–319.
- 64. Breen M. Empirical investigations of the role of implicit prosody in sentence processing. Lang. Linguist. Compass. 2014;8:37–50.
- 65. Kayser SJ, Ince RAA, Gross J, Kayser C. Irregular speech rate dissociates auditory cortical entrainment, evoked responses, and frontal alpha. J. Neurosci. 2015;35:14691–14701. doi: 10.1523/JNEUROSCI.2243-15.2015.
- 66. Schroeder CE, Lakatos P. Low-frequency neuronal oscillations as instruments of sensory selection. Trends Neurosci. 2009;32:9–18. doi: 10.1016/j.tins.2008.09.012.
- 67. Piai V, et al. Direct brain recordings reveal hippocampal rhythm underpinnings of language processing. Proc. Natl. Acad. Sci. 2016;113:11366–11371. doi: 10.1073/pnas.1603312113.