Curr Dir Psychol Sci. 2021;30(6):459–467. doi:10.1177/09637214211037665

THE DEVELOPMENT OF COMMUNICATION ACROSS TIMESCALES

Elise A. Piazza, Mira L. Nencheva, Casey Lew-Williams

Abstract

How do young children learn to organize the statistics of communicative input across milliseconds and months? Developmental science has made progress in understanding how infants learn patterns in language and how infant-directed speech is engineered to ease short-timescale processing, but less is known about how they link perceptual experiences across multiple levels of processing within an interaction (from syllables to stories) and across development. In this article, we propose that three domains of research – statistical summary, neural processing hierarchies, and neural coupling – will be fruitful in uncovering the dynamic exchange of information between children and adults, both in the moment and in aggregate. In particular, we discuss how the study of brain-to-brain and brain-to-behavior coupling between children and adults will further our understanding of how children’s neural representations become aligned with the increasingly complex statistics of communication across timescales.

INTRODUCTION

When adults and young children communicate, they exchange information across milliseconds, seconds, and minutes. Statistics of these exchanges accumulate through diverse interactions across hours, days, and months, with consequences for children’s cognition that last for many years. Children are tasked not only with integrating communicative input across the first set of timescales above (e.g., connecting related words into meaningful sentences and narratives), but also with aggregating experiences across many interactions (Gogate & Hollich, 2010; McMurray, 2016; Altmann, 2017). However, developmental experiments have often focused on relatively shorter-timescale processing and learning (e.g., sounds, words, and sentences), rather than on how information becomes integrated into a larger narrative context and across multiple naturalistic interactions.

In this paper, we call for the merging of three complementary frameworks that will enable exciting progress in understanding how young children organize the statistics of their communicative input across timescales: statistical summary, neural processing hierarchies, and neural coupling. We will primarily focus on how recent theoretical and methodological advances in neuroscience and psychology can provide insight into children’s integration of complex, naturalistic input within single interactions (i.e., across timescales of milliseconds, seconds, and minutes). This confluence of ideas will facilitate the investigation of novel lines of scientific inquiry: how children integrate lower-level information into gist-like summaries (e.g., how they aggregate syllables, words, and sentences into concepts and narratives), how adults package information when speaking to children in a way that facilitates this integration, and how patterns of adult-child coupling support the development of communication, including language processing, improvisatory play, and collaborative problem-solving. In the final section, we propose an expansion of these approaches to the study of long-term development, which unfolds over timescales of days to years.

INTEGRATING THE STATISTICS OF COMMUNICATIVE INPUT ACROSS LEVELS OF COMPLEXITY AND ACROSS TIME

To learn language from complex, multisensory input, infants must extract statistical representations over time. This process has been described as a form of invariance detection (Gogate & Hollich, 2010) in which infants pick up on relatively stable patterns in their caregivers’ input. Emergentist models of this process (Altmann, 2017) suggest that infants accumulate knowledge by gradually transforming the sensorimotor details of individual episodes (e.g., the word “kitty” spoken with different prosodic contours and referring to various real, stuffed, and cartoon cats) into higher-order statistical representations. Such models help explain widespread evidence that infants infer novel word boundaries via transitional probabilities between syllables (Saffran, 2020; Gómez, 2002), aggregate across multiple, individually ambiguous trials to learn word-referent mappings (Smith & Yu, 2008), and use the distributional statistics of speech sounds to guide their perception (Maye et al., 2002).
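As a toy illustration of the transitional-probability computation described above (not an analysis from the cited studies; the nonce lexicon, stream length, and boundary threshold are illustrative assumptions), syllable-to-syllable transitional probability is high within a word and dips at word boundaries, and those dips alone recover the words:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) for every adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever transitional probability dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:          # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return sorted(set(words))

# A continuous stream of nonce words with no pauses, as in classic studies:
# within-word TPs are exactly 1.0 here, while boundary TPs hover around 1/3.
random.seed(0)
lexicon = [["bi", "da", "ku"], ["go", "la", "tu"], ["pa", "do", "ti"]]
stream = [syl for word in random.choices(lexicon, k=60) for syl in word]
print(segment(stream))  # -> ['bidaku', 'golatu', 'padoti']
```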

These characterizations of infant processing and learning dovetail with a distinct but closely related literature on statistical summary processing in adult perception (Whitney & Yamanashi Leib, 2018; Zhao et al., 2011), which has primarily been studied at very short timescales (milliseconds to seconds). For example, adults can precisely estimate the average pitch of a sequence of auditory tones, even when they struggle to report information about individual tones (Piazza et al., 2013). This compression of the local details of input into a more abstract, compact representation, or "gist", is thought to contribute to the efficient recognition and retention of sounds (McDermott et al., 2013).
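A brief simulation conveys why such gist estimates can outperform item memory (the noise parameters are assumed for illustration and are not those of the cited experiments): when each tone is encoded with independent internal noise, the mean of the sequence is recovered more precisely than any single tone.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_tones, noise_sd = 10_000, 8, 0.3

# Each trial: a sequence of tone log-frequencies centered on an "ensemble pitch",
# with each tone encoded by the listener under independent internal noise
tones = rng.normal(np.log(440.0), 0.2, size=(n_trials, n_tones))
perceived = tones + rng.normal(0.0, noise_sd, size=tones.shape)

# Error when reporting one remembered tone vs. the average of the sequence:
single_tone_error = np.abs(perceived[:, 0] - tones[:, 0]).mean()
ensemble_error = np.abs(perceived.mean(axis=1) - tones.mean(axis=1)).mean()

print(f"mean error, individual tone: {single_tone_error:.3f}")
print(f"mean error, ensemble mean:   {ensemble_error:.3f}")  # smaller by ~1/sqrt(n_tones)
```

Averaging n independent noisy observations shrinks the error by roughly 1/sqrt(n), which is one simple account of why the ensemble is perceived more precisely than its parts.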

Unlike the tasks used in typical studies of infant and adult statistical processing, everyday interactions are not neatly divided into learning and test phases. Rich, real-life learning requires the continual formation of summary representations in real time and across parallel levels of processing. To navigate the real-time dynamics of speech comprehension and production during early interactions with caregivers, infants and toddlers must perform statistical computations for each of several features (e.g., prosody, semantic meaning) and timescales in parallel, while integrating information across these levels (Figure 1A). Whereas the pitch height of an individual word might provide a cue to its momentary salience, pitch variability across a sentence might indicate a parent’s emotional tone. Similarly, the average semantic feature vector across a set of descriptions of a character contributes to its overall identity by the end of a conversation or story.
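The sketch below makes these parallel computations concrete under toy assumptions: the pitch values and random "semantic" vectors are placeholders for real acoustic measurements and word embeddings, and the three summaries correspond to the word, sentence, and narrative timescales just described.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder inputs: one pitch sample per word and a toy semantic vector per word
words = ["where", "is", "the", "kitty", "?"]
pitch = np.array([220.0, 210.0, 215.0, 310.0, 340.0])   # Hz; rising (question) contour
embeddings = {w: rng.normal(size=16) for w in words}    # stand-in for real embeddings

# Word timescale: a single word's pitch height as a momentary salience cue
kitty_salience = pitch[words.index("kitty")] - pitch.mean()

# Sentence timescale: pitch variability as a cue to intonation or emotional tone
sentence_pitch_sd = pitch.std()

# Narrative timescale: a running average of semantic vectors, a gist of the content so far
gist = np.mean([embeddings[w] for w in words], axis=0)

print(f"word-level salience (Hz above mean): {kitty_salience:.1f}")
print(f"sentence-level pitch variability:    {sentence_pitch_sd:.1f}")
print(f"narrative-level gist vector norm:    {np.linalg.norm(gist):.2f}")
```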

Figure 1.

Schematic diagrams of three levels of communicative processing, described in terms of (a) statistical summary and (b) neural hierarchies. (a) Statistical summary representations of the word cat, split into three levels of processing. At the single-word level (bottom panel), cat is represented in terms of its acoustic features (e.g., pitch contour) and component phonemes. At the sentence level (middle panel), the word is integrated into its nearby context, including the surrounding words and their pitch contour (which indicates a question here). At the narrative level (top panel), the word is processed in terms of a full story arc about a lost cat, which contains several events whose gist is summarized in each of the four circles, surrounded by the local details that compose them. (b) Example depiction of neural entrainment to each of the three levels in three age groups: adult, child, and infant. Early sensory regions, such as primary auditory cortex or A1 (bottom), which process fast dynamics of language (syllables, words), might be fairly well synchronized across age groups. However, higher-order, default mode network regions (top), which integrate over longer timescales, might be synchronized across multiple adults but not across members of the three age groups.

A developmental understanding of this computation and integration of statistics across multiple levels of processing would benefit from neuroscientific investigations of the representations of summary statistics during real-life communicative exchanges. For instance, while the auditory cortex represents the local acoustic features of speech with relatively fine temporal detail (e.g., pitches of individual syllables, differences between the sounds “cat” and “kitty”), regions further downstream must integrate statistics over longer timescales (e.g., the difference in overall contour between questions and sentences), and even higher-order regions likely suppress these acoustic details in the service of higher-level semantic representations that unfold over longer timescales (e.g., a story arc about a lost cat, which summarizes across many local cat-related words and phrases; Lerner et al., 2011). However, the neural encoding of these increasingly long-timescale representations – and how they collapse relatively finer details in long-term memory – is underexplored. Experimental paradigms that track children’s integration of such gist-like summaries over multiple timescales will provide a useful model of language processing that spans low-level perceptual averaging and higher-level summary representations of communicative information. Recent models of how the adult brain hierarchically processes the structure of communicative input across multiple timescales, discussed in the section below, could provide a key framework for tracking this integration of statistics at different developmental stages.

HIERARCHICAL PROCESSING OF COMMUNICATIVE STATISTICS IN THE BRAIN

The auditory systems of many communicative species are structured hierarchically, reflecting the organization of natural sounds from simple to relatively complex (Margoliash & Fortune, 1992). In humans, these levels likely result from parallel computations of incoming speech at shorter timescales (e.g., sounds and syllables) and longer timescales (e.g., sentence constructions and entire narrative arcs). Echoing hierarchical models of language processing from cognitive science (e.g., McClelland & Elman, 1986), recent neuroscience research has advanced our understanding of how the adult brain hierarchically organizes the successive building blocks of language and communication.

To understand how the brain processes information at different timescales, researchers have used experimental designs that disrupt meaning at one or more timescales while preserving meaning in others. One MEG study using this approach (Ding et al., 2016) found that distinct frequencies of neural activity entrain to different timescales of meaningful linguistic information, and coupling between these frequencies is thought to coordinate information flow between brain regions that process speech at different levels (Giraud & Poeppel, 2012). Related MEG research has extended this hierarchy to characterize the transition from acoustic to lexical representations (Brodbeck, Hong, & Simon, 2018). Experiments using fMRI have incorporated longer, more naturalistic stimuli and determined the contributions of different brain regions to each timescale of processing. In one study (Lerner et al., 2011), adults listened to a spoken story that had been scrambled at the word, sentence, or paragraph level. The between-subject reliability of each brain region’s responses (measured using inter-subject temporal correlation; Hasson et al., 2004) indexes the timescales that region processes. The results revealed a nested neural hierarchy for processing these three levels of complexity, spanning from primary auditory cortex (which responded reliably even in the word-level scramble condition, indicating that it processes the local, moment-to-moment details of speech) to higher-order, default mode network regions (precuneus, frontal cortex), which responded reliably only to the paragraph-scrambled (and intact) story. This hierarchy extends to musical structure as well (Farbood et al., 2015), suggesting that it supports the extraction of information over windows of various durations beyond the domain of language. In adults, one important feature of long-timescale brain regions is that they represent holistic, gist-like information, and thus respond similarly even when the low-level details of a stimulus are changed (e.g., when a story is translated into a bilingual’s other language; Honey et al., 2012; Yeshurun et al., 2021).
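A minimal sketch of the leave-one-out inter-subject correlation (ISC) logic on simulated data (the region names and signal structure are assumptions for illustration, not results): each subject's regional time series is correlated with the average of all other subjects', so a region whose responses share a stimulus-locked component shows high ISC, and scrambling removes that component only in long-timescale regions.

```python
import numpy as np

def isc(region_timeseries):
    """Leave-one-out inter-subject correlation for one region.

    region_timeseries: array of shape (n_subjects, n_timepoints).
    Returns the mean correlation of each subject with the average of the others.
    """
    n = region_timeseries.shape[0]
    rs = []
    for i in range(n):
        others = np.delete(region_timeseries, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(region_timeseries[i], others)[0, 1])
    return float(np.mean(rs))

# Simulated logic of the scrambling paradigm: A1 tracks the stimulus in every
# condition (shared signal plus subject-specific noise), while a long-timescale
# region tracks it only when the long-range structure is intact.
rng = np.random.default_rng(3)
t, n_subj = 300, 10
stimulus = rng.normal(size=t)

a1 = stimulus + rng.normal(0, 1.0, size=(n_subj, t))          # reliable in all conditions
dmn_intact = stimulus + rng.normal(0, 1.0, size=(n_subj, t))  # reliable for intact story
dmn_scrambled = rng.normal(size=(n_subj, t))                  # no shared signal left

print(f"A1 ISC:                 {isc(a1):.2f}")
print(f"DMN ISC, intact story:  {isc(dmn_intact):.2f}")
print(f"DMN ISC, word-scramble: {isc(dmn_scrambled):.2f}")    # near zero
```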

Little is known about how children accumulate complex details into a narrative or concept. By expanding the range of timescales in studies of children’s language processing, we will be positioned to learn how the richness and structure of children’s representations evolve over time (Figure 1B). For example, it may be the case that early in infancy, the default mode network processes shorter-timescale speech input, and only gradually converges onto mature, longer-timescale narrative-level representations. Inter-subject correlation between multiple children’s brains, in response to stimuli whose meaning is disrupted at different timescales, will help determine how they progress from representing smaller to larger units of language. Neural decoding approaches have the power to reveal the richness of children’s representations of input (e.g., patterns of voxel activation reflecting the shape of a toy, its category identity, or its overall role in a story), thereby providing insights into the processing of information at different levels of complexity that may or may not be apparent in children’s behaviors. It would also be useful to examine how neural hierarchies vary across children and across age groups, and whether this variability is meaningfully related to their deployment of different processing mechanisms (e.g., prediction) in different contexts. Relatedly, behavioral paradigms may inform us about how infants’ attention and memory systems become increasingly capable of tracking hierarchically nested information (e.g., Bauer & Mandler, 1989; Rosenberg & Feigenson, 2013) and how parents package their language to help children build representational complexity (Schwab & Lew-Williams, 2016). Given that adults must ultimately communicate information across several representational levels to children, new measures of the development of neural processing hierarchies will enable understanding of how adults and children jointly coordinate the exchange of information across timescales.
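As one illustration of the decoding approach mentioned above, the following sketch trains a cross-validated classifier on synthetic "voxel" patterns; the category structure, pattern dimensionality, and noise levels are invented for the example, and a real analysis would use measured activity patterns from children.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials_per_class, n_voxels = 40, 50

# Synthetic voxel patterns for two stimulus categories (e.g., two toy types):
# each category has a characteristic spatial pattern plus trial-to-trial noise
pattern_a = rng.normal(size=n_voxels)
pattern_b = rng.normal(size=n_voxels)
X = np.vstack([
    pattern_a + rng.normal(0, 2.0, size=(n_trials_per_class, n_voxels)),
    pattern_b + rng.normal(0, 2.0, size=(n_trials_per_class, n_voxels)),
])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Cross-validated decoding accuracy: above-chance accuracy implies that the
# region's activity patterns carry category information, even absent overt behavior
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```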

COUPLING PROVIDES INSIGHTS INTO THE REAL-TIME TRANSFER OF REPRESENTATIONS ACROSS TIMESCALES

As infants and toddlers exchange information with their caregivers in real time and across time, they somehow progress toward mature hierarchical representations of words, sentences, and narratives. How does each party make dynamic adjustments in order to align their representations of communicative content? In particular, how do caregivers accommodate the limitations of infant cognition? For example, when two adults see a dark cloud, their shared understanding of weather allows them to predict that rain is likely. While an adult and infant may have shared perceptual representations of a dark cloud, the infant’s brain may lack the knowledge needed to predict upcoming rain, which relies on rich semantic associations stored in long-term memory. The adult will tend to scaffold such predictive leaps for the infant (e.g., “Uh-oh! I think we need an umbrella!”). Many such experiences, accumulated over time, may help the infant build longer-timescale representations (see Figure 2).

Figure 2.

Cartoon example of a communicative interaction between a mother and child during real-life play. Red and blue curves (bottom-right) depict possible neural time series from one early sensory brain region (primary auditory cortex, or A1) and a network of higher-order regions (default mode network, or DMN). As they progress through the interaction, the dyad flows through several states of neural coupling in each region. When they are not interacting (including no shared sensory input), their brains are uncoupled in both regions. When they are hearing the same speech, or viewing the same object, shared input drives sensory coupling. Frequently, the adult anticipates predictable content before the child does, since the adult has access to richer semantic associations and narrative schemas than the child. This happens both at relatively short timescales (e.g., a dark cloud will be followed by rain) and longer ones (e.g., a canonical ending to the “lost cat” schema is that the cat is stuck in a tree). Sometimes, the child reroutes the conversation in a surprising way (e.g., a kangaroo will help find the lost cat), and the adult adapts to this detour. These are all examples of leader-follower dynamics, facilitated by behaviors that guide the other person toward a joint state of understanding. Whenever the adult and child converge on that joint state (e.g., they represent rain in a related way), there is mirrored coupling between them. By the end of the interaction, each person has dynamically adapted to the other to create this story, so the interaction as a whole represents synergistic coupling.

Coupled interactions between infants and caregivers have long been investigated behaviorally, yielding many insights about how caregivers tailor their speech and how infants actively contribute to multimodal communicative exchanges (Fernald et al., 1989; Piazza et al., 2017; Schwab & Lew-Williams, 2016; Soderstrom, 2007). Caregivers adapt their communication in ways that are initially optimized for shorter-timescale processing (McMurray, 2016), but over time they increase the complexity of their words and utterances (Schwab et al., 2018). They also change their behavior in response to infants’ attentional focus; for example, they provide appropriately timed labels (Suanda et al., 2019) and try to align attention on the same object (Suarez-Rivera et al., 2019). Over time, caregivers’ and infants’ behaviors become increasingly contingent on each other both within language (Abney et al., 2017; Hirsh-Pasek et al., 2015) and across domains, such that their gestures, gaze directions, and speech acts influence one another in a back-and-forth manner (Goldstein & Schwade, 2008; Gros-Louis et al., 2006). Thus, successful communication relies on mirrored behaviors and representations, as well as on non-mirrored, contingent responses.

While behavioral coupling highlights the interplay between adults’ and children’s outward actions, neural (brain-to-brain) coupling provides unique access to the alignment of inner mental representations that are not always behaviorally measurable, especially in prelinguistic infants. In adults, neural coupling between a speaker and listener during storytelling predicts listeners’ comprehension of the story, thus providing a measure of communication success and information transfer (Stephens et al., 2010). As in research on neural hierarchies, neural coupling in long-timescale regions reflects shared high-level understanding of language and cannot be explained simply by processing the same sensory input at the same time (Honey et al., 2012; Yeshurun et al., 2017; Piazza et al., 2020). Furthermore, coupling has been proposed to play a mechanistic role in learning by ensuring that the learner’s brain enters a phase of high excitability during optimal moments for encoding information (Wass et al., 2020).

Neural coupling has been measured using multiple modalities (fMRI, fNIRS, EEG) and in multiple ways, either between individuals scanned in different sessions while listening to the same story (e.g., Stephens et al., 2010; Liu et al., 2017), or between members of a dyad engaging in an interactive task during the same session (e.g., Piazza et al., 2020; see Wass et al., 2020 for a review of adult-child studies). Although most studies of adult-child neural coupling have focused on mirrored synchrony, or the one-to-one alignment of the neural dynamics between two individuals, it is not always optimal for an adult’s and child’s brain to process the same information in the same way at the same time (e.g., Figure 2, panel 3). Many interactions also involve leading and following (Piazza et al., 2020), or synergistic, mutual adaptation between two individuals (Hasson & Frith, 2016); such patterns of non-mirrored coupling are likely to support improvisational aspects of creative play and problem solving.
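One common way to detect such leader-follower dynamics is lagged cross-correlation between the two brains' time series. The sketch below applies it to simulated signals in which the "adult" leads the "child" by a fixed delay; the delay, noise level, and signal form are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Correlate x with y shifted by each lag in [-max_lag, max_lag].

    A positive peak lag means x leads y (y echoes x after a delay)."""
    lags = range(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag >= 0:
            r = np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]
        else:
            r = np.corrcoef(x[-lag:], y[: len(y) + lag])[0, 1]
        out.append(r)
    return np.array(list(lags)), np.array(out)

# Simulate an adult signal and a child signal that follows it ~3 samples later
rng = np.random.default_rng(5)
t, delay = 500, 3
source = rng.normal(size=t + delay)
child = source[:-delay] + rng.normal(0, 0.8, size=t)   # delayed, noisy echo
adult = source[delay:]                                 # aligned adult series

lags, rs = lagged_correlation(adult, child, max_lag=10)
print(f"peak coupling at lag {lags[rs.argmax()]} (adult leads child)")
```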

The study of neural coupling will illuminate the ways in which adults and children align to each other across different timescales. For example, during communicative interactions, infants’ relatively short-timescale processing may support the formation of sound categories and speech segmentation, while adults’ broad ability to process longer timescales may facilitate sentence-level predictions and semantic/narrative understanding. If so, there may be a pattern of progression from synchrony in early sensory regions to non-mirrored, leader-follower dynamics in higher-order regions (likely with moments of weak neural alignment). Interestingly, the convergence of children’s neural representations with adults’ could be linear across development or could link to particular milestones, such as spikes in vocabulary acquisition, vocal production, or perspective-taking. Another ripe area for future research is the contingency of infant neural representations on adult behaviors. At the neural level, how does high-quality, temporally contingent feedback from an adult improve the sophistication of children’s communicative output in real time (Goldstein et al., 2003)? Does the particular pattern of coupling within a parent-child dyad (e.g., leader-follower, contingency, mutual adaptation) predict communicative success or learning above and beyond each individual’s brain representations? Such investigations will require a widening of the temporal and spatial windows of analysis of coupling, to account for differences in the timing of adult and child neural processes as well as in the brain regions performing complementary communicative functions at different stages.
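Widening the temporal window of analysis, as suggested above, could begin with something as simple as time-resolved (sliding-window) coupling. In this toy example, a simulated dyad is coupled during the first half of an interaction and uncoupled afterward; the window length, step size, and signals are all illustrative assumptions.

```python
import numpy as np

def windowed_coupling(x, y, window, step):
    """Correlation between x and y within successive sliding windows."""
    starts = range(0, len(x) - window + 1, step)
    return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                     for s in starts])

rng = np.random.default_rng(6)
t = 600
shared = rng.normal(size=t)

# A dyad that shares input during the first half of the interaction only
adult = shared + rng.normal(0, 0.5, size=t)
child = np.where(np.arange(t) < t // 2,
                 shared + rng.normal(0, 0.5, size=t),  # coupled half
                 rng.normal(size=t))                   # uncoupled half

coupling = windowed_coupling(adult, child, window=60, step=30)
print(np.round(coupling, 2))   # high values early in the series, near zero late
```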

DEVELOPMENT AS A NATURAL MODEL OF LONG-TIMESCALE INTEGRATION

We have suggested that new, naturalistic experimental paradigms will provide insights into how – within a single interaction – adults and children share their representations of the world, which inherently span different timescales of processing. Borrowing from adult research, the longest timescale we have discussed thus far maps onto extraction of the overall narrative arc of a story or conversation. However, the process of development itself provides the ultimate model of truly long-timescale integration, as children must actively learn to communicate over the course of thousands of interactions. This development-level timescale is the most challenging to study and will benefit from the creative merging of approaches and fields. For example, how do different brain regions perform statistical summary computations to integrate over diverse instances of a word, separated in time and space and articulated by multiple people? How do these computations enable a child’s formation of a rich, unified, and usable representation of a concept? How do the types of statistical invariance that adults emphasize – and children attend to – co-evolve across childhood? Development-level investigations will ultimately expand the definition of the neural processing hierarchy beyond within-interaction timescales to include learning and communication across hours, days, and even years, and in doing so may reveal exciting discoveries about the neural processes that support long-term integration. This type of investigation will advance our understanding of a range of cognitive systems; research on the development of memory, for example, currently lacks explanations for how statistical learning is retained over timescales longer than an experimental session (Gómez, 2017; McMurray, Horst, & Samuelson, 2012), or how default mode network regions communicate with the hippocampus during long-term learning.

Neuroimaging studies that capture fine-grained changes in patterns of brain activity over the course of development, rather than at a single time point in the lab, will serve as powerful tests of well-known models of learning. For example, dynamic systems models (Smith & Thelen, 2003) propose that subtle, short-timescale changes gradually move a system toward a destination, such as the first instance of walking or the production of a new word. The ability to ‘peer under the hood’ throughout these processes will help to illuminate how changes in children’s and adults’ neural representations during coupled interactions contribute to the emergence of advances in children’s behavior. Finally, understanding the evolution of parent-child coupling during communicative interactions over developmental time could provide key insights about individual differences in learning, as well as critical outcomes such as school readiness.

CONCLUSION

In this paper, we have called for the application and integration of three scientific frameworks to understand the development of children’s ability to process information at multiple timescales. First, statistical summary may provide one mechanism through which a child learns to integrate experiences over time, such as the average of word tokens across contexts or the gist of a story across diverse words and events. Second, studying the development of neural processing hierarchies could help explain how children build up neural representations of communicative information that unfolds over time, from milliseconds to minutes. Third, interactive experiments measuring neural and behavioral coupling between adults and young children will illuminate how adults share their mental representations with children in real time, and how children contribute to their own learning by actively guiding adults’ actions. By studying these phenomena across the longest timescale of development itself, we will be positioned to build an integrative model of how the statistics of adult-infant communication give rise to children’s learning outcomes.

REFERENCES

1. Abney DH, Warlaumont AS, Oller DK, Wallot S, & Kello CT (2017). Multiple coordination patterns in infant and adult vocalizations. Infancy, 22(4), 514–539.
2. Altmann GTM (2017). Abstraction and generalization in statistical learning: implications for the relationship between semantic types and episodic tokens. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1711), 20160060. doi:10.1098/rstb.2016.0060
3. Bauer PJ, & Mandler JM (1989). One thing follows another: Effects of temporal structure on 1- to 2-year-olds’ recall of events. Developmental Psychology, 25(2), 197.
4. Brodbeck C, Hong LE, & Simon JZ (2018). Rapid transformation from auditory to linguistic representations of continuous speech. Current Biology, 28, 3976–3983.
5. Ding N, Melloni L, Zhang H, Tian X, & Poeppel D (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nature Neuroscience, 19(1), 158.
6. Farbood MM, Heeger DJ, Marcus G, Hasson U, & Lerner Y (2015). The neural processing of hierarchical structure in music and speech at different timescales. Frontiers in Neuroscience, 9, 157.
7. Fernald A, Taeschner T, Dunn J, Papousek M, Boysson-Bardies B, & Fukui I (1989). A cross-language study of prosodic modifications in mothers’ and fathers’ speech to preverbal infants. Journal of Child Language, 16, 477–501.
8. Giraud AL, & Poeppel D (2012). Cortical oscillations and speech processing: emerging computational principles and operations. Nature Neuroscience, 15(4), 511.
9. Gogate LJ, & Hollich G (2010). Invariance detection within an interactive system: A perceptual gateway to language development. Psychological Review, 117(2), 496–516. doi:10.1037/a0019049 – Discusses infants’ detection of statistical regularities in communicative input.
10. Goldstein MH, King AP, & West MJ (2003). Social interaction shapes babbling: Testing parallels between birdsong and speech. Proceedings of the National Academy of Sciences, 100(13), 8030–8035.
11. Goldstein MH, & Schwade JA (2008). Social feedback to infants’ babbling facilitates rapid phonological learning. Psychological Science, 19, 515–523.
12. Gómez RL (2002). Variability and detection of invariant structure. Psychological Science, 13, 431–436.
13. Gómez RL (2017). Do infants retain the statistics of a statistical learning experience? Insights from a developmental cognitive neuroscience perspective. Philosophical Transactions of the Royal Society B: Biological Sciences, 372, 20160054.
14. Hasson U, Nir Y, Levy I, Fuhrmann G, & Malach R (2004). Intersubject synchronization of cortical activity during natural vision. Science, 303(5664), 1634–1640.
15. Hasson U, & Frith CD (2016). Mirroring and beyond: coupled dynamics as a generalized framework for modelling social interactions. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1693), 20150366.
16. Hirsh-Pasek K, Adamson LB, Bakeman R, Tresch Owen M, Golinkoff RM, Pace A, Yust PKS, & Suma K (2015). The contribution of early communication quality to low-income children’s language success. Psychological Science, 26(7), 1071–1083.
17. Honey CJ, Thompson CR, Lerner Y, & Hasson U (2012). Not lost in translation: neural responses shared across languages. Journal of Neuroscience, 32(44), 15277–15283.
18. Lerner Y, Honey CJ, Silbert LJ, & Hasson U (2011). Topographic mapping of a hierarchy of temporal receptive windows using a narrated story. Journal of Neuroscience, 31(8), 2906–2915. – An introduction to neural hierarchies for processing increasingly complex information.
19. Margoliash D, & Fortune ES (1992). Temporal and harmonic combination-sensitive neurons in the zebra finch’s HVc. Journal of Neuroscience, 12(11), 4309–4326.
20. Maye J, Werker JF, & Gerken L (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82, B101–B111.
21. McClelland JL, & Elman JL (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1–86.
22. McDermott JH, Schemitsch M, & Simoncelli EP (2013). Summary statistics in auditory perception. Nature Neuroscience, 16(4), 493.
23. McMurray B, Horst JS, & Samuelson L (2012). Word learning emerges from the interaction of online referent selection and slow associative learning. Psychological Review, 119(4), 831–877.
24. McMurray B (2016). Language at three timescales: The role of real-time processes in language development and evolution. Topics in Cognitive Science, 8(2), 393–407. – An overview of developmental processes that unfold across multiple timescales.
25. Piazza EA, Sweeny TD, Wessel D, Silver MA, & Whitney D (2013). Humans use summary statistics to perceive auditory sequences. Psychological Science, 24(8), 1389–1397. – Evidence of statistical summary integration in adults’ auditory perception.
26. Piazza EA, Iordan MC, & Lew-Williams C (2017). Mothers consistently alter their unique vocal fingerprints when communicating with infants. Current Biology, 27(20), 3162–3167.
27. Piazza EA, Hasenfratz L, Hasson U, & Lew-Williams C (2020). Infant and adult brains are coupled to the dynamics of natural communication. Psychological Science, 31(1), 6–17.
28. Rosenberg RD, & Feigenson L (2013). Infants hierarchically organize memory representations. Developmental Science, 16(4), 610–621.
29. Saffran JR (2020). Statistical language learning in infancy. Child Development Perspectives, 14, 49–54.
30. Schwab JF, & Lew-Williams C (2016). Language learning, socioeconomic status, and child-directed speech. WIREs Cognitive Science, 7, 264–275.
31. Schwab JF, Rowe ML, Cabrera N, & Lew-Williams C (2018). Fathers’ repetition of words is coupled with children’s vocabularies. Journal of Experimental Child Psychology, 166, 437–450.
32. Smith LB, & Thelen E (2003). Development as a dynamic system. Trends in Cognitive Sciences, 7, 343–348.
33. Smith L, & Yu C (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106, 1558–1568.
34. Soderstrom M (2007). Beyond babytalk: Re-evaluating the nature and content of speech input to preverbal infants. Developmental Review, 27(4), 501–532.
35. Stephens GJ, Silbert LJ, & Hasson U (2010). Speaker–listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences, 107, 14425–14430.
36. Suanda SH, Barnhart M, Smith LB, & Yu C (2019). The signal in the noise: The visual ecology of parents’ object naming. Infancy, 24(3), 455–476.
37. Suarez-Rivera C, Smith LB, & Yu C (2019). Multimodal parent behaviors within joint attention support sustained attention in infants. Developmental Psychology, 55(1), 96.
38. Wass SV, Whitehorn M, Haresign IM, Phillips E, & Leong V (2020). Interpersonal neural entrainment during early social interaction. Trends in Cognitive Sciences, 24(4), 329–342. – A review of adult-child coupling during dyadic interactions.
39. Whitney D, & Yamanashi Leib A (2018). Ensemble perception. Annual Review of Psychology, 69, 105–129.
40. Yeshurun Y, Swanson S, Simony E, Chen J, Lazaridi C, Honey CJ, & Hasson U (2017). Same story, different story: the neural representation of interpretive frameworks. Psychological Science, 28, 307–319.
41. Yeshurun Y, Nguyen M, & Hasson U (2021). The default mode network: where the idiosyncratic self meets the shared social world. Nature Reviews Neuroscience, 22, 181–192.
42. Zhao J, Ngo N, McKendrick R, & Turk-Browne NB (2011). Mutual interference between statistical summary perception and statistical learning. Psychological Science, 22, 1212–1219.
