Abstract
In the brains of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception of, and response to, contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. The ADS perceives a contact call by detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching for the caller, and, via a series of frontal lobe-brainstem connections, a contact call is produced in return.
Because the human ADS also processes speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and to the strengthening of direct frontal-brainstem connections, the ADS came to relay auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mothers. Vocal control could then enable question-answer conversations, with offspring emitting a low-level distress call to inquire about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words, and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
Keywords: Speech, Evolution, Auditory dorsal stream, Contact calls, Auditory cortex, Vocal production
1. Introduction
In the past five decades, gorillas, orangutans, chimpanzees and bonobos have been shown capable of learning sign language (Blake, 2004; Gibson, 2011). An important cognitive distinction between the language used by humans and the language used by other apes lies in the ability to ask questions. This was first noted by Premack and Premack (1984), who reported that, although their chimpanzee, Sarah, showed no difficulty answering questions or repeating questions before answering them, she never used the question signs to inquire about her own environment. Jordania (2006), in his review of the literature, noted that other signing apes likewise did not use questions and that their initiation of conversations was limited to commands (e.g., “me more eat”) and observational statements (e.g., “bird there”). This absence of a questioning mind is in direct contrast to human toddlers and children, who are renowned for their incessant use of questions. My interpretation of this human-ape distinction is that during human evolution, we transitioned from displaying curiosity toward items that are present in our environment (i.e., observational statements) to curiosity toward items that are absent from our environment (i.e., WH questions). Developing curiosity about out-of-sight events and objects could thus explain the rapid migration of humans across the globe. Furthermore, this curiosity toward the unknown is the driving force behind scientific exploration and technological development. One could hence argue that it is the ability to ask that separates us from other animals and makes the human species unique.
Although no non-human primate has been reported to ask questions, many have been reported to exchange calls for monitoring location (i.e., contact calls). For example, when a mother and her infant are physically separated, each in turn emits a call to signal its location to the other. This exchange of contact calls could therefore be interpreted as akin in meaning to the question “where are you?” If human communication and contact calls are related, it suggests that the preliminary urge to learn about the unknown derives from infants and mothers seeking to reunite. In the present paper, based on findings collected from brain research, genetics and paleoarcheology, I demonstrate that human speech and contact calls use the same brain structures, and consequently argue that human speech emerged from contact call exchange. I then argue that by modifying their contact calls with intonations, infants were capable of signaling to their mothers whether they were under a high or low level of distress. Given the turn-taking nature of these calls, and as both mothers and infants were capable of modifying their calls with intonations, the ability to choose the call type gave rise to the first yes-no conversation structure. In this scenario, infants were capable of inquiring about the safety of objects in their environment (i.e., with a low-level distress call), and mothers were capable of responding to that question with a high-level distress call to signal danger or a low-level distress call to signal safety. As the use of intonations became more prevalent, vocal control became more volitional. Eventually, individuals became capable of enunciating novel calls, and the question-answer conversation pattern evolved further, with infants asking their mothers for the names of objects in their surroundings and then mimicking the mothers’ vocal responses.
2. Models of language processing in the brain and their relation to language evolution
Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke-Lichtheim-Geschwind model (Geschwind, 1965; Lichtheim, 1885; Wernicke & Tesak, 1974). This model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. According to this model, words are perceived via a specialized word-reception center (Wernicke’s area) located in the left temporoparietal junction. This region then projects to a word-production center (Broca’s area) located in the left ventrolateral prefrontal cortex. Because almost all language input was thought to funnel via Wernicke’s area and all language output via Broca’s area, it became extremely difficult to identify the basic properties of each region. This lack of a clear definition of the contributions of Wernicke’s and Broca’s regions to human language made it extremely difficult to identify their homologues in other primates (for one attempt, see Aboitiz & García, 1997). With the advent of MRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions and is therefore flawed (Anderson et al., 1999; Dronkers et al., 1999; Dronkers, 2000; Dronkers et al., 2004; Poeppel et al., 2012; Rauschecker & Scott, 2009 - Supplemental Material; Vignolo et al., 1986). The refutation of such an influential and dominant model opened the door to new models of language processing in the brain and, as will be presented below, to the formulation of a novel account of the evolutionary origins of human language from a neuroscientific perspective.
In the last two decades, significant advances have occurred in our understanding of the neural basis of human auditory processing. In parallel with the refutation of the classical model, comparative studies reported homologies between the auditory cortices of humans and other primates. Based on histological staining, functional imaging and recordings from the auditory cortex of several primate species, 3 auditory fields were identified in the primary auditory cortex, and 9 associative auditory fields were shown to surround them (Figure 1 top left; Bendor & Wang, 2006; Kaas & Hackett, 2000 - review; Petkov et al., 2006; Rauschecker et al., 1995). Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM; de la Mothe et al., 2006; Morel et al., 1993; Rauschecker et al., 1997). Recently, evidence has accumulated indicating homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl’s gyrus (Sweet et al., 2005; Wallace et al., 2002). By mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to that of the monkey, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR), and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1; Da Costa et al., 2011; Humphries et al., 2010; Langers & van Dijk, 2012; Striem-Amit et al., 2011; Woods et al., 2010).
Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the monkey auditory cortex. Recordings from the surface of the auditory cortex (supra-temporal plane) showed that the anterior Heschl’s gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG), and the posterior Heschl’s gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1 top right; Gourévitch et al., 2008; Guéguin et al., 2007). This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous, non-overlapping activation clusters in the pSTG and mSTG-aSTG while participants listened to sounds (Chang et al., 2011).
Downstream of the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to the ventrolateral prefrontal cortex (VLPFC; Munoz et al., 2009; Romanski et al., 1999) and the amygdala (Kosmal et al., 1997). Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the VLPFC (Perrodin et al., 2011; Petkov et al., 2008; Poremba et al., 2004; Romanski et al., 2004; Russ et al., 2007; Tsunada et al., 2011). This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to the dorsolateral prefrontal cortex (although some projections do terminate in the VLPFC; Cusick et al., 1995; Romanski et al., 1999). Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the prefrontal cortex via a relay station in the intra-parietal sulcus (IPS; Cohen, 2004; Deacon, 1992; Lewis & Van Essen, 2000; Roberts et al., 2007; Schmahmann et al., 2007; Seltzer & Pandya, 1984). This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). Comparison of the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging indicates similar AVS and ADS connections in the two species (monkey: Schmahmann et al., 2007; human: Catani et al., 2004; Frey et al., 2008; Makris et al., 2009; Menjot de Champfleur et al., 2013; Saur et al., 2008; Turken & Dronkers, 2011).
In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to the dorsolateral prefrontal cortex (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the VLPFC (Figure 1, bottom right-red arrows).
On the basis of converging evidence collected from monkeys and humans, it has been established that the AVS is responsible for the extraction of meaning from sounds (see Appendix A for a review of the literature). Specifically, the anterior auditory cortex is ascribed the perception of auditory objects, and downstream, the MTG and TP are thought to match the auditory objects with their corresponding audio-visual semantic representations (i.e., the semantic lexicon). This recognition of sounds in the AVS, although critical for intact communication, appears to contribute less to the uniqueness of human language than the ADS. This is demonstrated by the universality of sound recognition, which many mammalian species use for identifying prey, predators or potential mates. For example, dogs have been reported capable of recognizing spoken words and extracting their meaning (Kaminski et al., 2004; Pilley & Reid, 2011), and with fMRI this sound-recognition ability was localized to the TP of the AVS (Andics et al., 2014). Studies have also provided evidence that the sound recognition of non-human apes is equivalent in complexity to ours. Apes trained in human facilities were reported capable of learning human speech and comprehending its meaning (e.g., the bonobos Kanzi and Panbanisha were reported to recognize more than 3000 spoken English words; Blake, 2004; Gibson, 2011). Moreover, a study that compared humans and a chimpanzee in their recognition of acoustically distorted spoken words reported no differences between chimpanzee and human performance (Heimbauer et al., 2011). Finally, a diffusion tensor imaging study that compared the white matter of humans and chimpanzees demonstrated significant strengthening of ADS connectivity, but not AVS connectivity, in humans (Rilling et al., 2011). This study thus indicates that it is the ADS, and not the AVS, that separates us from our ape relatives.
In contrast to the AVS, the ADS is associated with a diverse range of seemingly unrelated functions. These functions, which will be detailed throughout this paper, include auditory localization, audio-visual integration, and voice detection in monkeys. In humans, the ADS has further been ascribed speech articulation, speech repetition and perception, and the production of linguistic prosody. In the present paper, I interpret the functional differences between the ADS of monkeys and that of humans as evidence of intermediate stages in the development of human speech.
3. The monkey ADS and its relationship with the visual dorsal stream
The most established role of the ADS is in audio-spatial processing. This is evidenced by studies that recorded neural activity from the auditory cortex of monkeys and correlated the strongest selectivity for changes in sound location with the posterior auditory fields (areas CM-CL), intermediate selectivity with primary area A1, and very weak selectivity with the anterior auditory fields (Benson et al., 1981; Miller & Recanzone, 2009; Rauschecker et al., 1995; Woods et al., 2006). In humans, behavioral studies of brain-damaged patients (Clarke et al., 2000; Griffiths et al., 1996) and EEG recordings from healthy participants (Anourova et al., 2001) demonstrated that sound localization is processed independently of sound recognition, and thus is likely independent of processing in the AVS. Consistently, a working memory study (Clarke et al., 1998) reported two independent working memory stores, one for acoustic properties and one for locations. Functional imaging studies that contrasted sound discrimination and sound localization reported a correlation between sound discrimination and activation in the mSTG-aSTG, and between sound localization and activation in the pSTG and PT (Ahveninen et al., 2006; Alain et al., 2001; Barrett & Hall, 2006; De Santis et al., 2006; Viceic et al., 2006; Warren & Griffiths, 2003), with some studies further reporting activation in the Spt-IPL region and the frontal lobe (Hart et al., 2004; Maeder et al., 2001; Warren et al., 2002). Some fMRI studies also reported that activation in the pSTG and Spt-IPL regions increased when individuals perceived sounds in motion (Baumgart et al., 1999; Krumbholz et al., 2005; Pavani et al., 2002). EEG studies using source localization likewise identified the pSTG-Spt region of the ADS as the sound-localization processing center (Tata et al., 2005a; Tata et al., 2005b).
A combined fMRI and MEG study corroborated the role of the ADS in audio-spatial processing by demonstrating that changes in sound location result in activation spreading from Heschl’s gyrus posteriorly along the pSTG and terminating in the IPL (Brunetti et al., 2005). In another MEG study, the IPL and frontal lobe were shown to be active during maintenance of sound locations in working memory (Lutzenberger et al., 2002).
In addition to localizing sounds, the ADS also appears to encode sound locations in memory and to use this information for guiding eye movements. Evidence for the role of the ADS in encoding sounds into working memory is provided by studies that trained monkeys in a delayed matching-to-sample task and reported activation in areas CM-CL (Gottlieb et al., 1989) and the IPS (Linden et al., 1999; Mazzoni et al., 1996) during the delay phase. The influence of this spatial information on eye movements occurs via projections of the ADS to the frontal eye field (FEF; a premotor area responsible for guiding eye movements) in the frontal lobe. This is demonstrated by anatomical tracing studies that reported connections between areas CM-CL-IPS and the FEF (Cusick et al., 1995; Stricanne et al., 1996), and by electrophysiological recordings that reported neural activity in both the IPS (Linden et al., 1999; Mazzoni et al., 1996; Mullette-Gillman, 2005; Stricanne et al., 1996) and the FEF (Russo & Bruce, 1994; Vaadia et al., 1986) prior to saccadic eye movements toward auditory targets.
In the visual system, it is well established that the inferior temporal lobe processes the identity of visual objects (visual ventral stream; purple arrow in Figure 2), and that the IPS and FEF process the visuospatial properties of objects and convert them into appropriate motor behaviors (visual dorsal stream; pink arrow in Figure 2; Goodale & Milner, 1992; Tanné-Gariépy et al., 2002; Ungerleider & Haxby, 1994). Given the dual role of the IPS-FEF pathway in audiospatial and visuospatial processing, it is tempting to assume that spatial processing in the two modalities occurs in parallel. Accumulating evidence, however, suggests that audiospatial input is first converted into a visuospatial code and then processed via a visuospatial network. In monkeys, electrophysiological studies that recorded activity in the IPS reported that almost all the neurons in this area that are selective for auditory locations are also selective for visual locations (Linden et al., 1999; Mazzoni et al., 1996). It was also shown that neurons in the IPS responded first to visual stimuli, and only after training did they become responsive to auditory stimuli (Grunewald et al., 1999). Retrograde tracing from the IPS revealed far fewer connections from the auditory cortex (primarily from areas CM-CL) than from the visual cortex (Lewis & Van Essen, 2000). The encoding of auditory information in visual working memory in the ADS is further demonstrated by a monkey fMRI study that correlated the integration of auditory and visual stimuli with activation in the posterior, but not anterior, auditory cortex (Kayser et al., 2009), and by a behavioral working memory study in monkeys that demonstrated audio-visual integration to be susceptible to visual, but not auditory, interference (Colombo & Graziano, 1994). Human studies also indicate that the ADS encodes sound locations in visual working memory.
For example, an fMRI study that compared cortical activation during a visual motion discrimination task and an auditory motion discrimination task reported overlapping activation in both modalities in the IPS (Lewis et al., 2000). A subsequent cross-modal integration task then revealed heightened activation in the IPS that was selective to the combined auditory and visual stimuli. On this account, the researchers ascribed to the IPS the role of audio-visual integration of spatial information. An fMRI study that compared the brain areas active during rehearsal of sound locations with those active during rehearsal of visual locations in working memory reported that the IPS was the only region always active in both tasks (Martinkauppi et al., 2000). An fMRI study that contrasted spatial orienting to sounds with spatial orienting to visual objects also reported overlapping parietal and frontal activation in both tasks (Smith et al., 2010b). Supporting the maintenance of sound locations in visual working memory in humans is also a study that reported a spatial bias in sound localization while participants wore visuospatially distorting goggles (prism goggles; Zwiers et al., 2003). Finally, a working memory study demonstrated that rehearsal of sound locations in working memory is more susceptible to visual interference than to auditory interference (Clarke et al., 1998). In contrast, rehearsal of simple tones in working memory, which in the context of the present model is associated with processing in the AVS, is more susceptible to auditory interference than to visual interference.
4. The ADS and the localization of con-specifics
In addition to processing the locations of sounds, evidence suggests that the ADS further integrates sound locations with auditory objects. Demonstrating this integration are electrophysiological recordings from the posterior auditory cortex (Recanzone, 2008; Tian et al., 2001) and the IPS (Gifford & Cohen, 2005), as well as a PET study (Gil-da-Costa et al., 2006), that reported neurons selective to monkey vocalizations. One of these studies (Tian et al., 2001) further reported neurons in this region (CM-CL) characterized by dual selectivity for both a vocalization and a sound location. Consistent with the role of the pSTG-PT in the localization of specific auditory objects are also studies that demonstrate a role for this region in the isolation of specific sounds. For example, two functional imaging studies correlated circumscribed pSTG-PT activation with the spreading of sounds into an increasing number of locations (Smith et al., 2010a-fMRI; Zatorre et al., 2002-PET). Similarly, an fMRI study correlated the perception of acoustic cues that are necessary for separating musical sounds (pitch chroma) with pSTG-PT activation (Warren et al., 2003).
When elucidating the role of the primate ADS in the integration of a sound’s location with calls, it remains to be determined what kind of information the ADS extracts from the calls. This information could then be used to make inferences about the function of the ADS. Studies of both monkeys and humans suggest that the posterior auditory cortex has a role in the detection of a new speaker. A monkey study that recorded electrophysiological activity from neurons in the posterior insula (near the pSTG) reported neurons that discriminate monkey calls based on the identity of the caller (Remedios et al., 2009a). Accordingly, human fMRI studies that instructed participants to discriminate voices reported an activation cluster in the pSTG (Andics et al., 2010; Formisano et al., 2008; Warren et al., 2006). A study that recorded activity from the auditory cortex of an epileptic patient further reported that the pSTG, but not the aSTG, was selective for the presence of a new speaker (Lachaux et al., 2007-patient 1). The role of this posterior voice area, and the manner in which it differs from voice recognition in the AVS (Andics et al., 2010; Belin & Zatorre, 2003; Nakamura et al., 2001; Perrodin et al., 2011; Petkov et al., 2008), was further shown via electro-stimulation of another epileptic patient (Lachaux et al., 2007-patient 2). This study reported that stimulation of the aSTG resulted in changes in the perceived pitch of voices (including the patient’s own voice), whereas stimulation of the pSTG resulted in reports that her voice was “drifting away.” This report indicates a role for the pSTG in the integration of sound location with an individual voice.
Consistent with this role of the ADS is a study reporting that patients with AVS damage but a spared ADS (surgical removal of the anterior STG/MTG) were no longer capable of isolating environmental sounds in the contralesional space, whereas their ability to isolate and discriminate human voices remained intact (Efron et al., 1983). Preliminary evidence from the field of fetal cognition suggests that the ADS is capable of identifying voices in addition to discriminating them. By scanning third-trimester fetuses with fMRI, researchers reported activation in area Spt when the hearing of voices was contrasted with pure tones (Jardri et al., 2012). The researchers also reported that a sub-region of area Spt was more selective to the maternal voice than to unfamiliar female voices. Based on these findings, I suggest that the ADS has acquired a special role in primates in the localization of conspecifics.
5. The ADS role in the perception and response to contact calls
To summarize, I have argued that the monkey’s ADS is equipped with the algorithms required for detecting a voice, isolating the voice from the background cacophony, determining its location, integrating the location of this voice into a visuospatial map of the area, and guiding eye movements toward the origin of the call. An example of a behavior that utilizes all these functions is the exchange of contact calls, which are used by extant primates to monitor the location or proximity of conspecific tribe members (Biben et al., 1986; Sugiura, 1998). The utilization of these ADS functions during the exchange of contact calls was demonstrated in studies of squirrel monkeys and vervet monkeys (Biben, 1992; Biben et al., 1989; Cheney & Seyfarth, 1980; Symmes & Biben, 1985). In both species, mothers showed no difficulty isolating their own infant’s call, localizing it, and maintaining this location in memory while approaching the source of the sound. A similar use of contact calls has been documented in our closest relatives, the chimpanzees. The exchange of pant-hoot calls was documented between chimpanzees separated by great distances (Goodall, 1986; Marler & Hobbett, 1975) and was used for re-grouping (Mitani & Nishida, 1993). Because infants respond to their mother’s pant-hoot call with their own unique vocalization (staccato call; Matsuzawa, 2006), the contact call exchange also appears to play an important role in the ability of mothers to monitor the location of their infants. It is also worth noting that when a chimpanzee produced a pant-hoot call and heard no call in response, the chimpanzee was reported to carefully scan the forest before emitting a second call (Goodall, 1986). This behavior demonstrates the relationship between the perception of contact calls, the embedding of auditory locations in a map of the environment, and the guidance of the eyes in searching for the origin of the call.
Further corroborating the involvement of the ADS in the perception of contact calls are intra-cortical recordings from the posterior insula (near area CM-A1) of the macaque, which revealed stronger selectivity for a contact call (coo call) than for a social call (threat call; Remedios et al., 2009a). Contrasting this finding is a study that recorded neural activity from the anterior auditory cortex and reported that the proportion of neurons dedicated to a contact call was similar to the proportion dedicated to other calls (Perrodin et al., 2011).
Perceiving a contact call can be viewed as a three-step process. The individual is required to detect a voice, integrate it with its location, and verify that no face is visible in that location (Figure 3). In the previous paragraphs, I provided evidence for the involvement of the ADS in the first two stages (voice detection and localization). Evidence for the role of the ADS in the integration of faces with their appropriate calls is provided by a study that recorded activity from the monkey auditory cortex (areas A1 and ML; Ghazanfar, 2005). The monkeys were presented with pictures of a monkey producing a call in parallel with hearing the appropriate call, or only saw the face or heard the call in isolation. Consistent with the prediction of the present model that visual perception of faces inhibits processing of contact calls, the face-call integration was much stronger for the social call (grunt call) than for the contact call (coo call). Associating this integration of faces with calls with processing in the ADS is consistent with the evidence presented earlier that ascribes audio-visual integration to the ADS (e.g., a monkey fMRI study correlated audio-visual integration with activation in the posterior, but not the anterior, auditory fields; Kayser et al., 2009).
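For illustration only, the three-step perception process described above can be sketched as a toy decision procedure. Every name, type and value below is a hypothetical placeholder chosen for the sketch; it is a schematic of the model's logic, not a claim about how the ADS is implemented neurally:

```python
from dataclasses import dataclass
from typing import Optional, Set, Tuple

Location = Tuple[float, float]  # hypothetical (azimuth, elevation) code

@dataclass
class AuditoryEvent:
    is_voice: bool                 # step 1: a voice was detected (posterior voice area)
    location: Optional[Location]   # step 2: the voice was localized (pSTG-Spt-IPL)

def perceive_contact_call(event: AuditoryEvent,
                          visible_faces: Set[Location]) -> bool:
    """Treat the event as a contact call only if it is a localized voice
    whose caller's face is out of sight (step 3: audio-visual check)."""
    if not event.is_voice:
        return False               # not a voice: no contact-call processing
    if event.location is None:
        return False               # voice not yet localized
    # A visible face at the sound's location inhibits contact-call perception.
    return event.location not in visible_faces

# A caller heard at (30.0, 0.0) with no face in view would trigger a
# visual search and a return call; a visible face suppresses the response.
heard = AuditoryEvent(is_voice=True, location=(30.0, 0.0))
print(perceive_contact_call(heard, visible_faces=set()))            # True
print(perceive_contact_call(heard, visible_faces={(30.0, 0.0)}))    # False
```

The ordering of the checks mirrors the proposed processing sequence: voice detection gates localization, and only a localized, face-absent voice is passed on to the visuospatial search and vocal-response stages.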
Hitherto, I have argued that the ADS is responsible for the perception of contact calls. However, as the perception of a contact call leads to the production of a contact call in return, it is also desirable to suggest a pathway through which the ADS mediates vocal production. Monkey studies have demonstrated that the ADS does not directly process vocal production. This was shown through studies that damaged the temporoparietal and/or VLPFC regions and reported that such lesions had no effect on spontaneous vocal production (Aitken, 1981; Sutton et al., 1974). This conclusion is also consistent with comprehensive electro-stimulation mappings of the monkey brain, which reported no spontaneous vocal production during stimulation of the temporal, occipital, parietal, or frontal lobes (Jürgens & Ploog, 1970; Robinson, 1967). These studies did, however, report emission of vocalizations after stimulating limbic and brainstem regions (amygdala, anterior cingulate cortex, basal forebrain, hypothalamus, mid-brain periaqueductal gray (PAG)). Moreover, based on a study that correlated chemical activation of the mid-brain PAG with vocal production, it was inferred that all the limbic regions project to central pattern generators in the PAG, which orchestrate vocal production (Zhang et al., 1994). In a series of tracing studies and electrophysiological recordings, it was also shown that the PAG projects to pre-motor brainstem areas (Hage & Jürgens, 2006; Hannig & Jürgens, 2005), which in turn project to brainstem motor nuclei (BMN; green arrows in Figure 2; Holstege, 1989; Holstege et al., 1997; Lüthe et al., 2000; Vanderhorst et al., 2001; Vanderhorst et al., 2000). The BMN then activate the individual muscles of the vocal apparatus.
Because the documented calls of non-human primates (including chimpanzees) show very little plasticity (Arcadi, 2000) and were observed only in highly emotional situations (Goodall, 1986), these limbic-brainstem generated calls are likely more akin to human laughter, sobbing, and screaming than to human speech. In relation to contact calls, a likely candidate for linking the ADS to the limbic-brainstem vocal network is the VLPFC. This is because electrophysiological recordings (Cohen, 2004) and anatomical tracing studies (Deacon, 1992; Roberts et al., 2007) in monkeys demonstrated this region to receive parietal afferents and to project to several limbic structures (Roberts et al., 2007). Corroborating the role of the VLPFC in mediating the vocal production of contact calls are studies that recorded neural activity from the VLPFC of macaques and reported neural discharge prior to cued or spontaneous contact call production (coo calls), but not prior to the production of vocalization-like facial movements (i.e., silent vocalizations; Coudé et al., 2011; see also Gemba et al., 1999 for similar results). Consistently, a study that sacrificed marmoset monkeys immediately after they responded to contact calls (phee calls) measured the highest neural activity (genomic expression of the cFos protein) in the posterior auditory fields (CM-CL) and the VLPFC (Miller et al., 2010). Monkeys sacrificed after only hearing contact calls or only emitting them showed neural activity in the same regions, but to a much smaller degree (see also Simões et al., 2010 for similar results in a study using the protein Egr-1). Further supporting the capacity of the VLPFC to regulate limbic-brainstem generated calls is the result of a tracing study that reported direct connections between the cortical motor area of the mouth and a brainstem motor nucleus that executes tongue movements (hypoglossal nucleus; Jürgens & Alipour, 2002).
Hence, this study suggests that in addition to the VLPFC-AMYG-PAG-BMN pathway (green arrows in Figure 2), a second, direct VLPFC-BMN pathway (brown arrow in Figure 2) has evolved in monkeys. The role of this direct VLPFC-BMN pathway is not yet known, but its anatomical connectivity implies that it is capable of bypassing the limbic-brainstem vocal network and could therefore dominate vocal production.
6. Evolutionary origins of vocal control in humans
According to Falk’s evolutionary hypothesis (2004), due to bipedal locomotion and the loss of hair in early Hominins, mothers were no longer capable of carrying their infants while foraging. As a result, mothers maintained contact with their infants through a vocal exchange of calls resembling contemporary “motherese” (the unique set of intonations that caregivers use when addressing infants). Following this model, Masataka (2009) provided evidence that macaque mothers are capable, to a limited extent, of modifying their contact calls to acoustically match those of their infants, and further suggested that the human mother-infant prosodic vocal exchange evolved from the exchange of contact calls between our apian ancestors. Support for the use of prosody in contact calls comes from studies of squirrel monkeys and macaque monkeys that reported small changes in the frequencies of contact calls, which resulted in the caller and responder emitting slightly different calls (Biben et al., 1986; Sugiura, 1998). Evidence supporting the transition from contact call expression to volitional speech is provided by a study in which macaque monkeys spontaneously learned to modify the vocal properties of their contact calls to request different objects from the experimenter (Hihara et al., 2003). Anecdotal reports of more generalized, albeit rudimentary, volitional vocal control in apes (Hayes & Hayes, 1952; Hopkins et al., 2007; Kalan et al., 2015; Koda et al., 2007; Koda et al., 2012; Lameira et al., 2015; Taglialatela et al., 2003; Wich et al., 2008) further indicate that the ability to modify calls with intonations was enhanced prior to our divergence from our apian relatives.
Supporting evidence for a role of the ADS in the transition from mediating contact calls to mediating human speech includes genetic studies that focused on mutations of the protein SRPX2 and its regulator protein FOXP2 (Roll et al., 2010). In mice, blockade of the SRPX2 or FOXP2 genes resulted in pups failing to emit distress calls when separated from their mothers (Shu et al., 2005; Sia et al., 2013). In humans, however, individuals afflicted with a mutated SRPX2 or FOXP2 gene were reported to present with speech dyspraxia (Roll et al., 2006; Watkins et al., 2002). A PET imaging study of an individual with a mutated SRPX2 gene correlated this patient’s disorder with abnormal activation (hyper-metabolism) along the ADS (pSTG-Spt-IPL; Roll et al., 2006). Similarly, an MRI study that scanned individuals with mutated FOXP2 reported increased grey matter density in the pSTG-Spt and reduced density in the VLPFC, further demonstrating abnormality in ADS structures (Belton et al., 2003). A role for the ADS in mediating speech production in humans has also been demonstrated in studies that correlated a more severe variant of this disorder, apraxia of speech, with IPL and VLPFC lesions (Deutsch, 1984; Edmonds & Marquardt, 2004; Hillis et al., 2004; Josephs, 2006; Kimura & Watson, 1989; Square et al., 1997). The role of the ADS in speech production is also demonstrated by a series of studies that directly stimulated sub-cortical fibers during surgical operations (reviewed in Duffau et al., 2008) and reported that interference in the pSTG and IPL resulted in an increase in speech production errors, whereas interference in the VLPFC resulted in speech arrest (see also Acheson et al., 2011; Stewart et al., 2001 for similar results using magnetic interference in healthy individuals).
Further support for the transition from contact call exchange to human language is provided by studies of hemispheric lateralization (Petersen et al., 1978). In one study, Japanese macaques and other Old World monkeys were trained to discriminate contact calls of Japanese macaques, which were presented to the right or left ear. Although all the monkeys were capable of completing the task, only the Japanese macaques showed a right-ear advantage, indicating left hemispheric processing of contact calls. In a study replicating the same paradigm, Japanese macaques showed an impaired ability to discriminate contact calls after suffering unilateral damage to the auditory cortex of the left, but not the right, hemisphere (Heffner & Heffner, 1984). This leftward lateralization of contact call perception is similar to the long-established role of the human left hemisphere in the processing of human language (Geschwind, 1965).
Considering Falk’s and Masataka’s hypotheses, evidence also indicates that the ADS was involved in the transition of contact calls into human speech through a transitory prosodic phase. This view is consistent with an fMRI study reporting that the perception of prosodic speech, when contrasted with flattened speech, results in stronger activation of the PT-pSTG of both hemispheres (Meyer et al., 2004). In congruence, an fMRI study that compared the perception of hummed speech to natural speech did not identify any brain area specific to humming, and thus concluded that humming is processed within the speech network (Ischebeck et al., 2007). fMRI studies that instructed participants to analyze the rhythm of speech also reported ADS activation (Spt, IPL, VLPFC; Geiser et al., 2008; Gelfand & Bookheimer, 2003). An fMRI study that compared speech perception and production with the perception and production of humming noises reported that, in both conditions, the overlapping activation area for perception and production (i.e., the area responsible for sensory-motor conversion) was located in area Spt of the ADS (Hickok et al., 2003). Supporting evidence for the role of the ADS in the production of prosody also comes from studies reporting that patients diagnosed with apraxia of speech are additionally diagnosed with expressive dysprosody (Odell et al., 1991; Odell et al., 2001; Shriberg et al., 2006 - FOXP2-affected individuals). Finally, the evolutionary account proposed here, from the vocal exchange of calls to a prosody-based language, is similar to the recent development of whistling languages, since these languages were documented to evolve from the exchange of simple calls used to report speakers’ locations into a complex semantic system based on intonations (Meyer, 2008).
7. Neuroanatomical origins of vocal control
In section 3, I presented evidence that in monkeys audio-spatial input is integrated with visual stimuli in the IPS prior to guiding eye movements. However, human studies exploring the neuroanatomical correlates of inner and outer speech report that these are processed in a purely auditory network. This was first shown by Conrad (1962), who instructed participants to rehearse a sequence of letters and showed that, at recall, they tend to confuse letters that sound similar, but not letters that look similar. Following this discovery, in a series of studies Baddeley & Hitch (1974) demonstrated that simultaneous performance of a visual and a verbal working memory task was nearly as efficient as performance of the same visual or verbal task in isolation. In contrast, these researchers showed that simultaneous performance of two separate verbal working memory tasks, or two visual working memory tasks, is less efficient than performing each task in isolation (e.g., recitation of the alphabet, but not discrimination of faces, interferes with rehearsal of a sequence of digits in working memory). These findings led the researchers to propose the existence of two working memory systems, one for visuospatial material (the visuospatial sketchpad) and another for verbal material (the phonological loop). The neuroanatomical correlate of the phonological loop was identified in fMRI studies that compared the activation pattern of individuals while they listened to speech with the pattern while they produced speech overtly or covertly (Buchsbaum et al., 2001; Hickok et al., 2003; Wise et al., 2001). These studies reported that area Spt became active in both speech perception and production (covert or overt), and thus associated this region with the conversion of auditory stimuli into appropriate articulations.
The role of the ADS in verbal rehearsal is in accordance with other functional imaging studies that localized activation to the same region during speech repetition tasks (Giraud & Price, 2001; Graves et al., 2008; Karbe et al., 1998). An intra-cortical recording study that sampled activity throughout most of the temporal, parietal, and frontal lobes also reported activation during speech repetition in regions along the ADS (areas Spt, IPL and VLPFC; Towle et al., 2008). The association of the ADS with rehearsal is also consistent with neuropsychological studies that correlated the lesions of individuals with a speech repetition deficit but intact auditory comprehension (i.e., conduction aphasia) with the temporoparietal junction (Axer et al., 2001; Baldo et al., 2008; Bartha & Benke, 2003; Buchsbaum et al., 2011; Fridriksson et al., 2010; Kimura & Watson, 1989; Leff et al., 2009; Selnes et al., 1985), and with studies that applied direct intra-cortical electro-stimulation to this same region and reported a transient speech repetition deficit (Anderson et al., 1999; Boatman et al., 2000; Ojemann, 1983; Quigg & Fountain, 1999; Quigg et al., 2006).
In a review discussing the role of the ADS in humans, Warren et al. (2005) noted a similarity in function between the conversion of visual input into eye movements in the IPS and the conversion of auditory input into articulations in area Spt. Given this dual role of the parietal lobe in the sensory-motor transformation of both audio-spatial and verbal information, I propose that during Hominin evolution there was a cortical field duplication, with the IPS (pink asterisk in Figure 2) duplicating to form area Spt (blue asterisk in Figure 2). Such duplication is a common phenomenon in mammalian evolution and has been reported in several cortical regions (Butler & Hodos, 2005). Consequently, because area Spt was closer to the auditory cortex than the IPS, area Spt received the majority of the auditory afferents. Moreover, because of the preexisting connections of the IPS with the VLPFC (Figure 2, pink arrow), I suggest that the duplication of the IPS resulted in a further duplication of its projections to the VLPFC (Figure 2, blue arrow). The development of connections from the auditory cortex to area Spt, and from there to the VLPFC, thus resulted in a pathway dedicated to audio-vocal conversion. The cortical field duplication hypothesis is consistent with an fMRI study that reported that visual and auditory working memory activate neighboring regions in the VLPFC (Rämä & Courtney, 2005). Further support for the cortical field duplication comes from research on autism, as individuals with autism report that they think in pictures instead of words (Grandin, 2008; Sahyoun et al., 2009), which implies ADS impairment. This conclusion is also consistent with an fMRI study that reported weaker activation in the IPL and VLPFC of autistic patients than in healthy participants (Just et al., 2007).
Evidence for cortical duplication in the IPL also derives from the fossil record. A study that reconstructed the endocrania of early Hominins noted that Homo habilis, but not any of its Australopith ancestors, is characterized by a dramatic heightening (but not widening) of the IPL and a less dramatic enlargement of the VLPFC, whereas the rest of the endocranium remains extremely similar to the endocrania of modern apes (Tobias, 1987). It is also worth noting that the recently discovered Australopithecus sediba (Carlson et al., 2011), the closest known relative of the Australopith predecessor of Homo habilis, also has very ape-like parietal and frontal lobes (although some modifications of the orbitofrontal surface were noted). Based on these findings, I propose that the cortical field duplication in the IPL occurred 2.3–2.5 million years ago and resulted in the brain enlargement that characterizes the Homo genus (Kimbel et al., 1996; Schrenk et al., 1993; Wood & Baker, 2011). This development equipped early Hominans (i.e., members of the genus Homo; Wood & Richmond, 2000) with partial control of lip and jaw movements, and thus endowed them with sufficient vocal control for modifying innate calls with intonations.
8. Prosodic speech and the emergence of questions
In the opening paragraph of this paper, I described the inability of apes to ask questions and proposed that the ability to ask questions emerged from contact calls. Because the ability to ask questions likely co-emerged with the ability to modify calls with prosodic intonations, I expand Falk’s and Masataka’s views regarding the prosodic origins of vocal language, and propose that the transition from contact calls to prosodic intonations could have emerged as a means of enabling infants to express different levels of distress (Figure 4). In such a scenario, the modification of a call with intonations expressing a high level of distress is akin in meaning to the sentence “mommy, come here now!”. Hence, the modification of calls with intonations could have served as a precursor for the development of prosody in contemporary vocal commands. On the other hand, the use of intonations for expressing a low level of distress is akin in meaning to the sentence “mommy, where are you?”. This use of prosody for asking the first question could therefore have served as the precursor for pragmatically converting calls into questions by means of prosody as well. This transition could be related to the ability of present-day infants to use intonations to change the pragmatic function of a word from a statement to a command/demand (“mommy!”) or a question (“mommy?”). Evidence supporting a relationship between the ability to ask questions and the ADS derives from the finding that patients with phonological dementia, who are known to suffer from degeneration along the ADS and show signs of ADS impairment (Gorno-Tempini et al., 2008; Rohrer et al., 2010), were impaired in distinguishing whether a spoken word was a question or a statement (Rohrer et al., 2012).
A possible route for the transition from emitting low-distress contact calls to asking questions is that individuals started to utilize the former to signal interest in objects in their environment. Given that both contact call exchange and contemporary speech are characterized by turn-taking, early Hominans could have responded to low-level distress calls with either high- or low-level distress calls. For example, when an infant expressed a low-level distress call prior to eating berries, his/her mother could have responded with a high-level distress call indicating that the food was dangerous, or a low-level distress call indicating that the food was safe (Figure 5). Eventually, the infant would emit the question call and wait for an appropriate answer from the mother before proceeding with the intended action. This conversation structure could be the precursor of present-day yes/no questions. As intonations became more prevalent and questions became more complex, the ADS and VLPFC-BMN pathways (blue and brown arrows in Figure 2) became more robust, and as a consequence individuals acquired more volitional control over the vocal apparatus. Consistent with the role of the ADS in speech repetition (see section 6), the increase in volitional vocal control could then have been used to invent new calls and teach them to offspring via vocal mimicry. This desire of offspring to learn about their environment by mimicking their mother’s calls and then encoding the new words into long-term memory could have been the guiding force that sparked the curiosity to explore the unknown. Discussing the transition from the exchange of low-level distress contact calls into complex vocal language, however, is beyond the scope of the present paper, and a model of this transition is discussed at length in a sibling paper titled ‘Vocal Mimicry as the Sculptor of the Human Mind: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language’ (Poliva, in preparation).
9. Comparisons of the ‘From Where to What’ model to previous language evolution models
Following in the footsteps of Dean Falk and Nobuo Masataka, the present model argues that human speech emerged from the exchange of contact calls via a transitory prosodic phase. Since the principle of natural selection was first acknowledged by the scientific community, however, several other accounts of language evolution have been proposed. Here, I will present two schools of thought and discuss their validity in the context of the present model.
The earliest model of language evolution was proposed by Charles Darwin. In his book The Descent of Man (1871), Darwin equated speech exchange with bird song, and proposed that the perception and production of songs during mating rituals were the precursor to human language (the singing ape hypothesis). Similar accounts suggesting that music participated in the evolutionary development of speech have also been proposed by more recent researchers (Jordania, 2006; Masataka, 2009; Mithen, 2006). However, so far the idea of music as a precursor to language has not taken hold in the scientific community due to a lack of substantiating evidence. In appendix A, I cite evidence that the perception of melodies occurs in the aSTG of the AVS. Given the mounting evidence indicating that speech is processed primarily in the ADS, we would expect precursors of speech to be processed in the same pathway (although see the review by Stewart, 2006, who suggests roles for other auditory fields in music perception as well). Since I hypothesize that singing-like calls were utilized for communication prior to complex vocal language, the idea of music perception and production is not too different from the present model. However, arguing that music served as a precursor to speech is different from arguing that music and speech emerged from a common proto-function. Investigating whether music served as a precursor to vocal language is problematic, since such a model implies that music perception is a uniquely human trait. Therefore, in order to resolve the conundrum of music evolution and its level of contribution to the emergence of vocal language, future studies should first attempt to determine whether non-human primates can perceive music (see Remedios et al., 2009b for preliminary findings).
A more recent school of thought argues that language with complex semantics and grammar was first communicated via the exchange of gestures and only recently became vocal (the gestural language model; Arbib, 2008; Corballis, 2010; Donald, 2005; Gentilucci & Corballis, 2006; Hewes, 1973; Studdert-Kennedy, 2005). According to this model, speech could have served to increase communication distance and to enable communication under low-visibility conditions (e.g., night, caves). This model is based primarily on the natural use of gestural communication between non-human primates, the ability of apes to learn sign language, and the natural development of sign languages in deaf communities. The model has also gained popularity since the discovery of mirror neurons, as proponents interpret these neurons as evidence of a mechanism dedicated to the imitation of gestures. From a neuroanatomical perspective it is plausible that vocal communication emerged from gestures. For instance, earlier I presented evidence that in monkeys the IPS encodes auditory stimuli into the visual dorsal stream. Given that the primary function of the visual dorsal stream is to convert visual stimuli into motor actions, it is possible that, in addition to mediating vocal responses to contact calls, this pathway served to convert visually observed gestures into produced gestures. This view is also consistent with an fMRI study that correlated hearing animal calls with bilateral activation in the mSTG-aSTG, whereas hearing tool sounds (e.g., hammer, saw) correlated with activation in the pSTG and IPL of the hemisphere contralateral to the dominant hand (Lewis et al., 2006). This recognition of tool sounds in the ADS instead of the AVS is surprising, as it could suggest that the teaching of tool use, which required gestures, was associated with speech production. Based on these findings, I find the hypothesis that speech and gestures co-evolved compelling.
However, given that my model delineates a course for the development of proto-conversations from calls that are used by extant primates, it is incongruent with the argument that a gestural language with complex grammar and semantics preceded vocal language.
10. ‘From Where to What’ - future research
In the present paper, I delineate a course for the early development of language by proposing five hypotheses: 1. In non-human primates, the ADS is responsible for perceiving and responding to contact calls; 2. The ADS of non-human primates integrates auditory locations into the visual dorsal stream; 3. Duplication of the ape’s IPS and its frontal projections (visual dorsal stream) resulted in a pathway (the ADS) dedicated to converting auditory stimuli into articulations; 4. Mother-offspring vocal exchange was the predominant force that guided the emergence of speech in the ADS; 5. Speech emerged from modifying calls with intonations for signaling low and high levels of distress, and these calls are the precursors of our use of intonations for converting words into questions and commands, respectively. Cumulative and converging evidence for each of these hypotheses was provided throughout the paper. However, as the veracity of a model can only be measured by its ability to predict experimental results, I present here outlines for five potential studies that could test these hypotheses.
In accordance with the first hypothesis, the ADS of non-human primates is responsible for the perception of, and vocal response to, contact calls. A possible way of testing this hypothesis is by inducing bilateral lesions to the temporo-parietal junction of a monkey and then measuring whether the monkey responds vocally to contact calls less often, or not at all, compared with before the lesion induction.
In accordance with the second hypothesis, audiospatial information is integrated in the IPS into visual regions and processed via the visual dorsal stream. This conclusion, although supported by many studies, is primarily derived from the study of Grunewald et al. (1999), who reported that neurons in the IPS responded only to visual stimuli prior to training, and responded to auditory stimuli only after training. This study therefore needs to be replicated in more primate species to determine its veracity.
In accordance with the third hypothesis, the Homo genus emerged as a result of the duplication of the IPS and its frontal projections. This duplication resulted in area Spt and its projections to the VLPFC. In contrast to the visual dorsal stream, which processes audiovisual spatial properties, the human ADS processes inner and outer speech. I therefore predict that fMRI studies scanning participants while they rehearse auditory locations, visual locations, and sentences will find that the first two tasks activate a more dorsal region of the frontal lobe than the latter task.
In accordance with the fourth hypothesis, mother-infant interaction was the guiding force that endowed the ADS with its role in speech. This hypothesis is primarily based on the finding that a sub-region of area Spt in human fetuses was shown to be selective for the voice of the mother (Jardri et al., 2012). Future studies should further explore whether this region remains active in the brains of infants and toddlers, and whether mothers also possess a region in the ADS that is selective for the voices of their children.
In accordance with the fifth hypothesis, the ADS originally served for discriminating calls that signal different levels of distress by analyzing their intonations. Today, this development is reflected in our ability to modify intonations for converting spoken words into questions and commands. A way of testing this hypothesis is by using fMRI to compare the brain regions active when participants categorize spoken words as questions or commands with the brain regions active when they discriminate the same words based on their emotional content (e.g., scared and happy). I predict that the former task will activate the ADS, whereas the latter will activate the AVS.
Acknowledgments
First, I would like to thank my advisor and mentor, Robert Rafal, for his advice, comments and support during the writing of this paper. I would also like to thank Ben Crossey, Iva Ivanova, Cait Jenkins, Ruth Fishman and Catherine Le Pape for their help in reviewing this paper, and the editors of American Journal Experts, Journal Prep and NPG Language Editing for their participation in the editing, proofreading and reviewing of this paper at its different stages.
Funding Statement
The author(s) declared that no grants were involved in supporting this work.
[version 1; referees: 3 approved with reservations]
Appendix A: The auditory ventral stream and its role in sound recognition
Accumulating converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1 (Yin et al., 2008), and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl’s gyrus (area hR) than in the posterior Heschl’s gyrus (area hA1; Steinschneider et al., 2004). In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1 - area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects (Bendor & Wang, 2006). The anterior auditory fields of monkeys were also demonstrated to show selectivity for conspecific vocalizations, both with intra-cortical recordings (Perrodin et al., 2011; Rauschecker et al., 1995; Russ et al., 2007) and with functional imaging (Joly et al., 2012; Petkov et al., 2008; Poremba et al., 2004). One monkey fMRI study further demonstrated a role of the aSTG in the recognition of individual voices (Petkov et al., 2008). The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with the isolation of auditory objects from background noise (Scheich et al., 1998; Zatorre et al., 2004) and with the recognition of spoken words (Binder et al., 2004; Davis & Johnsrude, 2003; Liebenthal, 2005; Narain, 2003; Obleser et al., 2006a; Obleser et al., 2007; Scott et al., 2000), voices (Belin & Zatorre, 2003), melodies (Benson et al., 2001; Leaver & Rauschecker, 2010), environmental sounds (Lewis et al., 2006; Maeder et al., 2001; Viceic et al., 2006), and non-speech communicative sounds (Shultz et al., 2012).
A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than to an unfamiliar foreign language (Lachaux et al., 2007 - patient 1). Consistently, electro-stimulation of the aSTG, but not the pSTG, resulted in impaired speech perception (Lachaux et al., 2007 - patient 1; see also Matsumoto et al., 2011 for a similar finding). Intra-cortical recordings from the right and left aSTG of another patient further demonstrated that speech is processed laterally to music (Lachaux et al., 2007 - patient 2). Recordings from the anterior auditory cortex of monkeys while they maintained learned sounds in working memory (Tsunada et al., 2011), and the debilitating effect of induced lesions to this region on working memory recall (Fritz et al., 2005; Stepien et al., 1960; Strominger et al., 1980), further implicate the AVS in maintaining perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported to be active during rehearsal of heard syllables with MEG (Kaiser et al., 2003) and fMRI (Buchsbaum et al., 2005). The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words, and that it is independent of working memory in the ADS, which mediates inner speech.
In humans, downstream of the aSTG, the MTG and TP are thought to constitute the semantic lexicon, a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships (see also the reviews by Hickok & Poeppel, 2007 and Gow, 2012, which discuss this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported to have an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia; Noppeney et al., 2006; Patterson et al., 2007). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage (Dronkers et al., 2004; Schwartz et al., 2009) and were shown to occur in non-aphasic patients after electro-stimulation of this region (Hamberger et al., 2007) or of the underlying white matter pathway (Duffau et al., 2008). Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text (Binder et al., 2009; Vigneau et al., 2006), and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences (Creutzfeldt et al., 1989).
In contradiction to the Wernicke-Lichtheim-Geschwind model, which posits that sound recognition occurs solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the WADA procedure; Hickok et al., 2008) or intra-cortical recordings from each hemisphere (Creutzfeldt et al., 1989) provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported a vocabulary in the right hemisphere that almost matches that of the left hemisphere in size (Zaidel, 1976). (The right-hemisphere vocabulary was equivalent to that of a healthy 11-year-old child.) This bilateral recognition of sounds is also consistent with the finding that a unilateral lesion to the auditory cortex rarely results in a deficit in auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does (Poeppel, 2012; Ulrich, 1978).
References
- Aboitiz F, García VR: The evolutionary origin of the language areas in the human brain. A neuroanatomical perspective. Brain Res Brain Res Rev. 1997;25(3):381–396. 10.1016/S0165-0173(97)00053-2
- Acheson DJ, Hamidi M, Binder JR, et al.: A common neural substrate for language production and verbal working memory. J Cogn Neurosci. 2011;23(6):1358–1367. 10.1162/jocn.2010.21519
- Ahveninen J, Jaaskelainen IP, Raij T, et al.: Task-modulated “what” and “where” pathways in human auditory cortex. Proc Natl Acad Sci U S A. 2006;103(39):14608–14613. 10.1073/pnas.0510480103
- Aitken PG: Cortical control of conditioned and spontaneous vocal behavior in rhesus monkeys. Brain Lang. 1981;13(1):171–184. 10.1016/0093-934X(81)90137-1
- Alain C, Arnott SR, Hevenor S, et al.: “What” and “where” in the human auditory system. Proc Natl Acad Sci U S A. 2001;98(21):12301–12306. 10.1073/pnas.211209098
- Anderson JM, Gilmore R, Roper S, et al.: Conduction aphasia and the arcuate fasciculus: a reexamination of the Wernicke-Geschwind model. Brain Lang. 1999;70(1):1–12. 10.1006/brln.1999.2135
- Andics A, Gácsi M, Faragó T, et al.: Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI. Curr Biol. 2014;24(5):574–578.
- Andics A, McQueen JM, Petersson KM, et al.: Neural mechanisms for voice recognition. Neuroimage. 2010;52(4):1528–1540. 10.1016/j.neuroimage.2010.05.048
- Anourova I, Nikouline VV, Ilmoniemi RJ, et al.: Evidence for dissociation of spatial and nonspatial auditory information processing. Neuroimage. 2001;14(6):1268–1277. 10.1006/nimg.2001.0903
- Arbib MA: From grasp to language: embodied concepts and the challenge of abstraction. J Physiol Paris. 2008;102(1–3):4–20. 10.1016/j.jphysparis.2008.03.001
- Arcadi AC: Vocal responsiveness in male wild chimpanzees: implications for the evolution of language. J Hum Evol. 2000;39(2):205–223. 10.1006/jhev.2000.0415
- Axer H, von Keyserlingk AG, Berks G, et al.: Supra- and infrasylvian conduction aphasia. Brain Lang. 2001;76(3):317–331. 10.1006/brln.2000.2425
- Baddeley AD, Hitch GJ: Working memory. In The Psychology of Learning and Motivation (Bower GA, Ed). 1974;8:47–90. 10.1016/S0079-7421(08)60452-1
- Baldo JV, Klostermann EC, Dronkers NF: It’s either a cook or a baker: patients with conduction aphasia get the gist but lose the trace. Brain Lang. 2008;105(2):134–140. 10.1016/j.bandl.2007.12.007
- Barrett DJ, Hall DA: Response preferences for “what” and “where” in human non-primary auditory cortex. Neuroimage. 2006;32(2):968–977. 10.1016/j.neuroimage.2006.03.050
- Bartha L, Benke T: Acute conduction aphasia: an analysis of 20 cases. Brain Lang. 2003;85(1):93–108. 10.1016/S0093-934X(02)00502-3
- Baumgart F, Gaschler-Markefski B, Woldorff MG, et al.: A movement-sensitive area in auditory cortex. Nature. 1999;400(6746):724–726. 10.1038/23390
- Belin P, Zatorre RJ: Adaptation to speaker's voice in right anterior temporal lobe. Neuroreport. 2003;14(16):2105–2109. 10.1097/00001756-200311140-00019
- Belton E, Salmond CH, Watkins KE, et al.: Bilateral brain abnormalities associated with dominantly inherited verbal and orofacial dyspraxia. Hum Brain Mapp. 2003;18(3):194–200. 10.1002/hbm.10093
- Bendor D, Wang X: Cortical representations of pitch in monkeys and humans. Curr Opin Neurobiol. 2006;16(4):391–399. 10.1016/j.conb.2006.07.001
- Benson DA, Hienz RD, Goldstein MH Jr: Single-unit activity in the auditory cortex of monkeys actively localizing sound sources: spatial tuning and behavioral dependency. Brain Res. 1981;219(2):249–267. 10.1016/0006-8993(81)90290-0
- Benson RR, Whalen DH, Richardson M, et al.: Parametrically dissociating speech and nonspeech perception in the brain using fMRI. Brain Lang. 2001;78(3):364–396. 10.1006/brln.2001.2484
- Biben M: Allomaternal vocal behavior in squirrel monkeys. Dev Psychobiol. 1992;25(2):79–92. 10.1002/dev.420250202
- Biben M, Symmes D, Masataka N: Temporal and structural analysis of affiliative vocal exchanges in squirrel monkeys (Saimiri sciureus). Behaviour. 1986;98(1):259–273. 10.1163/156853986X00991
- Biben M, Symmes D, Bernhards D: Contour variables in vocal communication between squirrel monkey mothers and infants. Dev Psychobiol. 1989;22(6):617–631. 10.1002/dev.420220607
- Binder JR, Liebenthal E, Possing ET, et al.: Neural correlates of sensory and decision processes in auditory object identification. Nat Neurosci. 2004;7(3):295–301. 10.1038/nn1198
- Binder JR, Desai RH, Graves WW, et al.: Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb Cortex. 2009;19(12):2767–2796. 10.1093/cercor/bhp055
- Blake J: Gestural communication in the great apes. In The Evolution of Thought: Evolutionary Origins of Great Ape Intelligence. Cambridge University Press. 2004;61–75. 10.1017/CBO9780511542299.007
- Boatman D, Gordon B, Hart J, et al.: Transcortical sensory aphasia: revisited and revised. Brain. 2000;123(Pt 8):1634–1642. 10.1093/brain/123.8.1634
- Brunetti M, Belardinelli P, Caulo M, et al.: Human brain activation during passive listening to sounds from different locations: an fMRI and MEG study. Hum Brain Mapp. 2005;26(4):251–261. 10.1002/hbm.20164
- Buchsbaum BR, Hickok G, Humphries C: Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cogn Sci. 2001;25(5):663–678. 10.1207/s15516709cog2505_2
- Buchsbaum BR, Olsen RK, Koch P, et al.: Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory. Neuron. 2005;48(4):687–697. 10.1016/j.neuron.2005.09.029
- Buchsbaum BR, Baldo J, Okada K, et al.: Conduction aphasia, sensory-motor integration, and phonological short-term memory: an aggregate analysis of lesion and fMRI data. Brain Lang. 2011;119(3):119–128. 10.1016/j.bandl.2010.12.001
- Butler AB, Hodos W: Comparative Vertebrate Neuroanatomy: Evolution and Adaptation. Wiley & Sons. 2005; pp. 505, 657–659. 10.1002/0471733849
- Carlson KJ, Stout D, Jashashvili T, et al.: The endocast of MH1, Australopithecus sediba. Science. 2011;333(6048):1402–1407. 10.1126/science.1203922
- Catani M, Jones DK, ffytche DH: Perisylvian language networks of the human brain. Ann Neurol. 2004;57(1):8–16. 10.1002/ana.20319
- Chang EF, Edwards E, Nagarajan SS, et al.: Cortical spatio-temporal dynamics underlying phonological target detection in humans. J Cogn Neurosci. 2011;23(6):1437–1446. 10.1162/jocn.2010.21466
- Cheney DL, Seyfarth RM: Vocal recognition in free-ranging vervet monkeys. Anim Behav. 1980;28(2):362–367. 10.1016/S0003-3472(80)80044-3
- Clarke S, Adriani M, Bellmann A: Distinct short-term memory systems for sound content and sound localization. Neuroreport. 1998;9(15):3433–3437. 10.1097/00001756-199810260-00018
- Clarke S, Bellmann A, Meuli RA, et al.: Auditory agnosia and auditory spatial deficits following left hemispheric lesions: evidence for distinct processing pathways. Neuropsychologia. 2000;38(6):797–807. 10.1016/S0028-3932(99)00141-4
- Cohen YE, Russ BE, Gifford GW 3rd, et al.: Selectivity for the spatial and nonspatial attributes of auditory stimuli in the ventrolateral prefrontal cortex. J Neurosci. 2004;24(50):11307–11316. 10.1523/JNEUROSCI.3935-04.2004
- Colombo M, Graziano M: Effects of auditory and visual interference on auditory-visual delayed matching to sample in monkeys (Macaca fascicularis). Behav Neurosci. 1994;108(3):636–639. 10.1037/0735-7044.108.3.636
- Conrad R: An association between memory errors and errors due to acoustic masking of speech. Nature. 1962;193:1314–1315. 10.1038/1931314a0
- Corballis MC: Mirror neurons and the evolution of language. Brain Lang. 2010;112(1):25–35. 10.1016/j.bandl.2009.02.002
- Coudé G, Ferrari PF, Rodà F, et al.: Neurons controlling voluntary vocalization in the macaque ventral premotor cortex. PLoS One. 2011;6(11):e26822. 10.1371/journal.pone.0026822
- Creutzfeldt O, Ojemann G, Lettich E: Neuronal activity in the human lateral temporal lobe. I. Responses to speech. Exp Brain Res. 1989;77(3):451–475. 10.1007/BF00249600
- Cusick CG, Seltzer B, Cola M, et al.: Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: evidence for subdivisions of superior temporal polysensory cortex. J Comp Neurol. 1995;360(3):513–535. 10.1002/cne.903600312
- Da Costa S, Van Der Zwaag W, Marques JP, et al.: Human primary auditory cortex follows the shape of Heschl's gyrus. J Neurosci. 2011;31(40):14067–14075. 10.1523/JNEUROSCI.2000-11.2011
- Darwin C: The Descent of Man and Selection in Relation to Sex. Appleton. 1871. 10.5962/bhl.title.24784
- Davis MH, Johnsrude IS: Hierarchical processing in spoken language comprehension. J Neurosci. 2003;23(8):3423–3431.
- Deacon TW: Cortical connections of the inferior arcuate sulcus cortex in the macaque brain. Brain Res. 1992;573(1):8–26. 10.1016/0006-8993(92)90109-M
- De Santis L, Clarke S, Murray MM: Automatic and intrinsic auditory “what” and “where” processing in humans revealed by electrical neuroimaging. Cereb Cortex. 2006;17(1):9–17. 10.1093/cercor/bhj119
- Deutsch SE: Prediction of site of lesion from speech apraxic error patterns. In Apraxia of Speech: Physiology, Acoustics, Linguistics, Management. College Hill Press. 1984;113–134.
- Donald M: Imitation and mimesis. In Perspectives on Imitation: Mechanisms of Imitation and Imitation in Animals (Hurley and Chater, Eds). MIT Press. 2005;284–300.
- Dronkers NF: The pursuit of brain-language relationships. Brain Lang. 2000;71(1):59–61. 10.1006/brln.1999.2212
- Dronkers NF, Redfern BB, Knight RT: The neural architecture of language disorders. In Gazzaniga MS (Ed), The Cognitive Neurosciences. Cambridge, MA: MIT Press. 1999;949–958.
- Dronkers NF, Wilkins DP, Van Valin RD Jr, et al.: Lesion analysis of the brain areas involved in language comprehension. Cognition. 2004;92(1–2):145–177. 10.1016/j.cognition.2003.11.002
- Duffau H: The anatomo-functional connectivity of language revisited. New insights provided by electrostimulation and tractography. Neuropsychologia. 2008;46(4):927–934. 10.1016/j.neuropsychologia.2007.10.025
- Edmonds L, Marquardt T: Syllable use in apraxia of speech: preliminary findings. Aphasiology. 2004;18(12):1121–1134. 10.1080/02687030444000561
- Efron R, Crandall PH: Central auditory processing. II. Effects of anterior temporal lobectomy. Brain Lang. 1983;19(2):237–253. 10.1016/0093-934X(83)90068-8
- Falk D: Prelinguistic evolution in early hominins: whence motherese? Behav Brain Sci. 2004;27(4):491–503. 10.1017/S0140525X04000111
- Formisano E, De Martino F, Bonte M, et al.: “Who” is saying “what”? Brain-based decoding of human voice and speech. Science. 2008;322(5903):970–973. 10.1126/science.1164318
- Frey S, Campbell JS, Pike GB, et al.: Dissociating the human language pathways with high angular resolution diffusion fiber tractography. J Neurosci. 2008;28(45):11435–11444. 10.1523/JNEUROSCI.2388-08.2008
- Fridriksson J, Kjartansson O, Morgan PS, et al.: Impaired speech repetition and left parietal lobe damage. J Neurosci. 2010;30(33):11057–11061. 10.1523/JNEUROSCI.1120-10.2010
- Fritz J, Mishkin M, Saunders RC: In search of an auditory engram. Proc Natl Acad Sci U S A. 2005;102(26):9359–9364. 10.1073/pnas.0503998102
- Geiser E, Zaehle T, Jancke L, et al.: The neural correlate of speech rhythm as evidenced by metrical speech processing. J Cogn Neurosci. 2008;20(3):541–552. 10.1162/jocn.2008.20029
- Gelfand JR, Bookheimer SY: Dissociating neural mechanisms of temporal sequencing and processing phonemes. Neuron. 2003;38(5):831–842. 10.1016/S0896-6273(03)00285-X
- Gemba H, Kyuhou S, Matsuzaki R, et al.: Cortical field potentials associated with audio-initiated vocalization in monkeys. Neurosci Lett. 1999;272(1):49–52. 10.1016/S0304-3940(99)00570-4
- Gentilucci M, Corballis MC: From manual gesture to speech: a gradual transition. Neurosci Biobehav Rev. 2006;30(7):949–960. 10.1016/j.neubiorev.2006.02.004
- Geschwind N: Disconnexion syndromes in animals and man. I. Brain. 1965;88(2):237–294. 10.1093/brain/88.2.237
- Ghazanfar AA, Maier JX, Hoffman KL, et al.: Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. J Neurosci. 2005;25(20):5004–5012. 10.1523/JNEUROSCI.0799-05.2005
- Gibson KR: Language or protolanguage? A review of the ape language literature. In The Oxford Handbook of Language Evolution. Oxford University Press. 2011;46–58. 10.1093/oxfordhb/9780199541119.013.0003
- Gifford GW 3rd, Cohen YE: Spatial and non-spatial auditory processing in the lateral intraparietal area. Exp Brain Res. 2005;162(4):509–512. 10.1007/s00221-005-2220-2
- Gil-da-Costa R, Martin A, Lopes MA, et al.: Species-specific calls activate homologs of Broca's and Wernicke's areas in the macaque. Nat Neurosci. 2006;9(8):1064–1070. 10.1038/nn1741
- Giraud AL, Price CJ: The constraints functional neuroimaging places on classical models of auditory word processing. J Cogn Neurosci. 2001;13(6):754–765. 10.1162/08989290152541421
- Goodall J: The Chimpanzees of Gombe: Patterns of Behavior. Belknap Press. 1986.
- Goodale MA, Milner AD: Separate visual pathways for perception and action. Trends Neurosci. 1992;15(1):20–25. 10.1016/0166-2236(92)90344-8
- Gorno-Tempini ML, Brambati SM, Ginex V, et al.: The logopenic/phonological variant of primary progressive aphasia. Neurology. 2008;71(16):1227–1234. 10.1212/01.wnl.0000320506.79811.da
- Gottlieb Y, Vaadia E, Abeles M: Single unit activity in the auditory cortex of a monkey performing a short term memory task. Exp Brain Res. 1989;74(1):139–148. 10.1007/BF00248287
- Gourévitch B, Le Bouquin Jeannès R, Faucon G, et al.: Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas. Hear Res. 2008;237(1–2):1–18. 10.1016/j.heares.2007.12.003
- Gow DW Jr: The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing. Brain Lang. 2012;121(3):273–288. 10.1016/j.bandl.2012.03.005
- Grandin T: Thinking in Pictures, Expanded Edition: My Life with Autism. Random House. 2008.
- Graves WW, Grabowski TJ, Mehta S, et al.: The left posterior superior temporal gyrus participates specifically in accessing lexical phonology. J Cogn Neurosci. 2008;20(9):1698–1710. 10.1162/jocn.2008.20113
- Griffiths TD, Rees A, Witton C, et al.: Evidence for a sound movement area in the human cerebral cortex. Nature. 1996;383(6599):425–427. 10.1038/383425a0
- Grunewald A, Linden JF, Andersen RA: Responses to auditory stimuli in macaque lateral intraparietal area I. Effects of training. J Neurophysiol. 1999;82(1):330–342.
- Guéguin M, Le Bouquin-Jeannès R, Faucon G, et al.: Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing. Cereb Cortex. 2007;17(2):304–313. 10.1093/cercor/bhj148
- Hage SR, Jürgens U: Localization of a vocal pattern generator in the pontine brainstem of the squirrel monkey. Eur J Neurosci. 2006;23(3):840–844. 10.1111/j.1460-9568.2006.04595.x
- Hamberger MJ, McClelland S 3rd, McKhann GM 2nd, et al.: Distribution of auditory and visual naming sites in nonlesional temporal lobe epilepsy patients and patients with space-occupying temporal lobe lesions. Epilepsia. 2007;48(3):531–538. 10.1111/j.1528-1167.2006.00955.x
- Hannig S, Jürgens U: Projections of the ventrolateral pontine vocalization area in the squirrel monkey. Exp Brain Res. 2006;169(1):92–105. 10.1007/s00221-005-0128-5
- Hart HC, Palmer AR, Hall DA: Different areas of human non-primary auditory cortex are activated by sounds with spatial and nonspatial properties. Hum Brain Mapp. 2004;21(3):178–190. 10.1002/hbm.10156
- Hayes KJ, Hayes C: Imitation in a home-raised chimpanzee. J Comp Physiol Psychol. 1952;45(5):450–459.
- Heffner HE, Heffner RS: Temporal lobe lesions and perception of species-specific vocalizations by macaques. Science. 1984;226(4670):75–76. 10.1126/science.6474192
- Heimbauer LA, Beran MJ, Owren MJ: A chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to phonetic content. Curr Biol. 2011;21(14):1210–1214.
- Hewes GW: Primate communication and the gestural origin of language. Curr Anthropol. 1973;14(1/2):5–24.
- Hickok G, Poeppel D: The cortical organization of speech processing. Nat Rev Neurosci. 2007;8(5):393–402. 10.1038/nrn2113
- Hickok G, Buchsbaum B, Humphries C, et al.: Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J Cogn Neurosci. 2003;15(5):673–682.
- Hickok G, Okada K, Barr W, et al.: Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures. Brain Lang. 2008;107(3):179–184. 10.1016/j.bandl.2008.09.006
- Hihara S, Yamada H, Iriki A, et al.: Spontaneous vocal differentiation of coo-calls for tools and food in Japanese monkeys. Neurosci Res. 2003;45(4):383–389. 10.1016/S0168-0102(03)00011-7
- Hillis AE, Work M, Barker PB, et al.: Re-examining the brain regions crucial for orchestrating speech articulation. Brain. 2004;127(7):1479–1487. 10.1093/brain/awh172
- Holstege G: Anatomical study of the final common pathway for vocalization in the cat. J Comp Neurol. 1989;284(2):242–252. 10.1002/cne.902840208
- Holstege G, Kerstens L, Moes MC, et al.: Evidence for a periaqueductal gray-nucleus retroambiguus-spinal cord pathway in the rat. Neuroscience. 1997;80(2):587–598. 10.1016/S0306-4522(97)00061-4
- Hopkins WD, Taglialatela JP, Leavens DA: Chimpanzees differentially produce novel vocalizations to capture the attention of a human. Anim Behav. 2007;73(2):281–286. 10.1016/j.anbehav.2006.08.004
- Humphries C, Liebenthal E, Binder JR: Tonotopic organization of human auditory cortex. Neuroimage. 2010;50(3):1202–1211. 10.1016/j.neuroimage.2010.01.046
- Ischebeck A, Indefrey P, Usui N, et al.: Reading in a regular orthography: an fMRI study investigating the role of visual familiarity. J Cogn Neurosci. 2004;16(5):727–741. 10.1162/089892904970708
- Jardri R, Houfflin-Debarge V, Delion P, et al.: Assessing fetal response to maternal speech using a noninvasive functional brain imaging technique. Int J Dev Neurosci. 2012;30(2):159–161. 10.1016/j.ijdevneu.2011.11.002
- Joly O, Pallier C, Ramus F, et al.: Processing of vocalizations in humans and monkeys: a comparative fMRI study. Neuroimage. 2012;62(3):1376–1389. 10.1016/j.neuroimage.2012.05.070
- Jordania J: Who Asked the First Question? The Origins of Human Choral Singing, Intelligence, Language and Speech. Tbilisi: Logos. 2006;334–338.
- Josephs KA, Duffy JR, Strand EA, et al.: Clinicopathological and imaging correlates of progressive aphasia and apraxia of speech. Brain. 2006;129(Pt 6):1385–1398. 10.1093/brain/awl078
- Jürgens U, Alipour M: A comparative study on the cortico-hypoglossal connections in primates, using biotin dextranamine. Neurosci Lett. 2002;328(3):245–248. 10.1016/S0304-3940(02)00525-6
- Jürgens U, Ploog D: Cerebral representation of vocalization in the squirrel monkey. Exp Brain Res. 1970;10(5):532–554. 10.1007/BF00234269
- Just MA, Cherkassky VL, Keller TA, et al.: Functional and anatomical cortical underconnectivity in autism: evidence from an fMRI study of an executive function task and corpus callosum morphometry. Cereb Cortex. 2007;17(4):951–961. 10.1093/cercor/bhl006
- Kaas JH, Hackett TA: Subdivisions of auditory cortex and processing streams in primates. Proc Natl Acad Sci U S A. 2000;97(22):11793–11799. 10.1073/pnas.97.22.11793
- Kaiser J, Ripper B, Birbaumer N, et al.: Dynamics of gamma-band activity in human magnetoencephalogram during auditory pattern working memory. Neuroimage. 2003;20(2):816–827. 10.1016/S1053-8119(03)00350-1
- Kalan AK, Mundry R, Boesch C: Wild chimpanzees modify food call structure with respect to tree size for a particular fruit species. Anim Behav. 2015;101:1–9. 10.1016/j.anbehav.2014.12.011
- Kaminski J, Call J, Fischer J: Word learning in a domestic dog: evidence for “fast mapping”. Science. 2004;304(5677):1682–1683. 10.1126/science.1097859
- Karbe H, Herholz K, Weber-Luxenburger G, et al.: Cerebral networks and functional brain asymmetry: evidence from regional metabolic changes during word repetition. Brain Lang. 1998;63(1):108–121. 10.1006/brln.1997.1937
- Kayser C, Petkov CI, Logothetis NK: Multisensory interactions in primate auditory cortex: fMRI and electrophysiology. Hear Res. 2009;258(1–2):80–88. 10.1016/j.heares.2009.02.011
- Kimbel WH, Walter RC, Johanson DC, et al.: Late Pliocene Homo and Oldowan tools from the Hadar Formation (Kada Hadar Member), Ethiopia. J Hum Evol. 1996;31(6):549–561. 10.1006/jhev.1996.0079
- Kimura D, Watson N: The relation between oral movement control and speech. Brain Lang. 1989;37(4):565–590.
- Koda H, Nishimura T, Tokuda IT, et al. : Soprano singing in gibbons. Am J Phys Anthropol. 2012;149(3):347–355. 10.1002/ajpa.22124 [DOI] [PubMed] [Google Scholar]
- Koda H, Oyakawa C, Kato A, et al. : Experimental evidence for the volitional control of vocal production in an immature gibbon. Behaviour. 2007;144(6):681–692. 10.1163/156853907781347817 [DOI] [Google Scholar]
- Kosmal A, Malinowska M, Kowalska DM: Thalamic and amygdaloid connections of the auditory association cortex of the superior temporal gyrus in rhesus monkey ( Macaca mulatta). Acta Neurobiol Exp (Wars). 1997;57(3):165–188. [DOI] [PubMed] [Google Scholar]
- Krumbholz K, Schönwiesner M, Rübsamen R, et al. : Hierarchical processing of sound location and motion in the human brainstem and planum temporale. Eur J Neurosci. 2005;21(1):230–238. 10.1111/j.1460-9568.2004.03836.x [DOI] [PubMed] [Google Scholar]
- Lachaux JP, Jerbi K, Bertrand O, et al. : A blueprint for real-time functional mapping via human intracranial recordings. PLoS One. 2007;2(10):e1094. 10.1371/journal.pone.0001094 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lameira AR, Hardus ME, Bartlett AM, et al. : Speech-like rhythm in a voiced and voiceless orangutan call. PLoS One. 2015;10(1):e116136. 10.1371/journal.pone.0116136 [DOI] [PMC free article] [PubMed] [Google Scholar]
- la Mothe de LA, Blumell S, Kajikawa Y, et al. : Cortical connections of the auditory cortex in marmoset monkeys: Core and medial belt regions. J Comp Neurol. 2006;496(1):27–71. 10.1002/cne.20923 [DOI] [PubMed] [Google Scholar]
- la Mothe de LA, Blumell S, Kajikawa Y, et al. : Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions. Anat Rec (Hoboken). 2012;295(5):800–821. 10.1002/ar.22451 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Langers DRM, van Dijk P: Mapping the tonotopic organization in human auditory cortex with minimally salient acoustic stimulation. Cereb Cortex. 2012;22(9):2024–2038. 10.1093/cercor/bhr282 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Leaver AM, Rauschecker JP: Cortical representation of natural complex sounds: effects of acoustic features and auditory object category. J Neurosci. 2010;30(22):7604–7612. 10.1523/JNEUROSCI.0296-10.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Leff AP, Schofield TM, Crinion JT, et al. : The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke. Brain. 2009;132(Pt 12):3401–3410. 10.1093/brain/awp273 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis JW, Beauchamp MS, DeYoe EA: A comparison of visual and auditory motion processing in human cerebral cortex. Cereb Cortex. 2000;10(9):873–888. 10.1093/cercor/10.9.873 [DOI] [PubMed] [Google Scholar]
- Lewis JW, Van Essen DC: Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey. J Comp Neurol. 2000;428(1):112–137. [DOI] [PubMed] [Google Scholar]
- Lewis JW, Phinney RE, Brefczynski-Lewis JA, et al. : Lefties get it “right” when hearing tool sounds. J Cogn Neurosci. 2006;18(8):1314–1330. 10.1162/jocn.2006.18.8.1314 [DOI] [PubMed] [Google Scholar]
- Lichtheim L: On aphasia. Brain. 1885;7:433–485. 10.1093/brain/awl134 [DOI] [Google Scholar]
- Liebenthal E, Binder JR, Spitzer SM: Neural substrates of phonemic perception. Cereb Cortex. 2005;15(10):1621–1631. 10.1093/cercor/bhi040 [DOI] [PubMed] [Google Scholar]
- Linden JF, Grunewald A, Andersen RA: Responses to auditory stimuli in macaque lateral intraparietal area II. Behavioral modulation. J Neurophysiol. 1999;82(1):343–358. [DOI] [PubMed] [Google Scholar]
- Lüthe L, Häusler U, Jürgens U: Neuronal activity in the medulla oblongata during vocalization. A single-unit recording study in the squirrel monkey. Behav Brain Res. 2000;116(2):197–210. 10.1016/S0166-4328(00)00272-2 [DOI] [PubMed] [Google Scholar]
- Lutzenberger W, Ripper B, Busse L, et al. : Dynamics of gamma-band activity during an audiospatial working memory task in humans. J Neurosci. 2002;22(13):5630–5638. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Maeder PP, Meuli RA, Adriani M, et al. : Distinct pathways involved in sound recognition and localization: a human fMRI study. Neuroimage. 2001;14(4):802–816. 10.1006/nimg.2001.0888 [DOI] [PubMed] [Google Scholar]
- Makris N, Papadimitriou GM, Kaiser JR, et al. : Delineation of the middle longitudinal fascicle in humans: a quantitative, in vivo, DT-MRI study. Cereb Cortex. 2009;19(4):777–785. 10.1093/cercor/bhn124 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marler P, Hobbett L: Individuality in a long-range vocalization of wild chimpanzees. Z Tierpsychol. 1975;38(1):37–109. 10.1111/j.1439-0310.1975.tb01994.x [DOI] [PubMed] [Google Scholar]
- Martinkauppi S, Rämä P, Aronen HJ, et al. : Working memory of auditory localization. Cereb Cortex. 2000;10(9):889–898. 10.1093/cercor/10.9.889 [DOI] [PubMed] [Google Scholar]
- Masataka N: The origins of language and the evolution of music: A comparative perspective. Phys Life Rev. 2009;6(1):11–22. 10.1016/j.plrev.2008.08.003 [DOI] [PubMed] [Google Scholar]
- Matsumoto R, Imamura H, Inouchi M, et al. : Left anterior temporal cortex actively engages in speech perception: A direct cortical stimulation study. Neuropsychologia. 2011;49(5):1350–1354. 10.1016/j.neuropsychologia.2011.01.023 [DOI] [PubMed] [Google Scholar]
- Matsuzawa T: Evolutionary Origins of Mother-Infant Relationship. In Cognitive development in chimpanzees.Tokyo: Springer-Verlag.2006;127–141. 10.1007/4-431-30248-4_8 [DOI] [Google Scholar]
- Mazzoni P, Bracewell RM, Barash S, et al. : Spatially tuned auditory responses in area LIP of macaques performing delayed memory saccades to acoustic targets. J Neurophysiol. 1996;75(3):1233–1241. [DOI] [PubMed] [Google Scholar]
- Menjot de Champfleur N, Lima Maldonado I, Moritz-Gasser S, et al. : Middle longitudinal fasciculus delineation within language pathways: a diffusion tensor imaging study in human. Eur J Radiol. 2013;82(1):151–157. 10.1016/j.ejrad.2012.05.034 [DOI] [PubMed] [Google Scholar]
- Meyer M, Steinhauer K, Alter K, et al. : Brain activity varies with modulation of dynamic pitch variance in sentence melody. Brain Lang. 2004;89(2):277–289. 10.1016/S0093-934X(03)00350-X [DOI] [PubMed] [Google Scholar]
- Meyer J: Typology and acoustic strategies of whistled languages: Phonetic comparison and perceptual cues of whistled vowels. J Int Phon Assoc. 2008;38(01):. 10.1017/S0025100308003277 [DOI] [Google Scholar]
- Miller CT, DiMauro A, Pistorio A, et al. : Vocalization Induced CFos Expression in Marmoset Cortex. Front Integr Neurosci. 2010;4:128. 10.3389/fnint.2010.00128 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Miller LM, Recanzone GH: Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity. Proc Natl Acad Sci U S A. 2009;106(14):5931–5935. 10.1073/pnas.0901023106 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mitani JC, Nishida T: Contexts and social correlates of long-distance calling by male chimpanzees. Anim Behav. 1993;45(4):735–746. Reference Source [Google Scholar]
- Mithen S: The Singing Neanderthals: the Origins of Music, Language, Mind and Body. Harvard University Press.2006. Reference Source [Google Scholar]
- Morel A, Garraghty PE, Kaas JH: Tonotopic organization, architectonic fields, and connections of auditory cortex in macaque monkeys. J Comp Neurol. 1993;335(3):437–459. 10.1002/cne.903350312 [DOI] [PubMed] [Google Scholar]
- Mullette-Gillman OA, Cohen YE, Groh JM: Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J Neurophysiol. 2005;94(4):2331–2352. 10.1152/jn.00021.2005 [DOI] [PubMed] [Google Scholar]
- Munoz M, Mishkin M, Saunders RC: Resection of the medial temporal lobe disconnects the rostral superior temporal gyrus from some of its projection targets in the frontal lobe and thalamus. Cereb Cortex. 2009;19(9):2114–2130. 10.1093/cercor/bhn236 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nakamura K, Kawashima R, Sugiura M, et al. : Neural substrates for recognition of familiar voices: a PET study. Neuropsychologia. 2001;39(10):1047–1054. 10.1016/S0028-3932(01)00037-9 [DOI] [PubMed] [Google Scholar]
- Narain C, Scott SK, Wise RJ, et al. : Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex. 2003;13(12):1362–1368. 10.1093/cercor/bhg083 [DOI] [PubMed] [Google Scholar]
- Noppeney U, Patterson K, Tyler LK, et al. : Temporal lobe lesions and semantic impairment: a comparison of herpes simplex virus encephalitis and semantic dementia. Brain. 2007;130(Pt 4):1138–1147. 10.1093/brain/awl344 [DOI] [PubMed] [Google Scholar]
- Obleser J, Boecker H, Drzezga A, et al. : Vowel sound extraction in anterior superior temporal cortex. Hum Brain Mapp. 2006;27(7):562–571. 10.1002/hbm.20201 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Obleser J, Zimmermann J, Van Meter J, et al. : Multiple stages of auditory speech perception reflected in event-related fMRI. Cereb Cortex. 2007;17(10):2251–2257. 10.1093/cercor/bhl133 [DOI] [PubMed] [Google Scholar]
- Odell K, McNeil MR, Rosenbek JC, et al. : Perceptual characteristics of vowel and prosody production in apraxic, aphasic, and dysarthric speakers. J Speech Hear Res. 1991;34(1):67–80. 10.1044/jshr.3401.67 [DOI] [PubMed] [Google Scholar]
- Odell K, Shriberg LD: Prosody-voice characteristics of children and adults with apraxia of speech. Clin Linguist Phon. 2001;15:275–307. Reference Source [Google Scholar]
- Ojemann GA: Brain organization for language from the perspective of electrical stimulation mapping. Behav Brain Sci. 1983;6(2):189–206. 10.1017/S0140525X00015491 [DOI] [Google Scholar]
- Patterson K, Nestor PJ, Rogers TT: Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci. 2007;8(12):976–987. 10.1038/nrn2277 [DOI] [PubMed] [Google Scholar]
- Pavani F, Macaluso E, Warren JD, et al. : A common cortical substrate activated by horizontal and vertical sound movement in the human brain. Curr Biol. 2002;12(18):1584–1590. [DOI] [PubMed] [Google Scholar]
- Perrodin C, Kayser C, Logothetis NK, et al. : Voice cells in the primate temporal lobe. Curr Biol. 2011;21(16):1408–1415. 10.1016/j.cub.2011.07.028 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Petersen MR, Beecher MD, Zoloth SR, et al. : Neural lateralization of species-specific vocalizations by Japanese macaques ( *Macaca fuscata*). Science. 1978;202(4365):324–327. 10.1126/science.99817 [DOI] [PubMed] [Google Scholar]
- Petkov CI, Kayser C, Augath M, et al. : Functional imaging reveals numerous fields in the monkey auditory cortex. PLoS Biol. 2006;4(7):e215. 10.1371/journal.pbio.0040215 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Petkov CI, Kayser C, Steudel T, et al. : A voice region in the monkey brain. Nat Neurosci. 2008;11(3):367–374. 10.1038/nn2043 [DOI] [PubMed] [Google Scholar]
- Pilley JW, Reid AK: Border collie comprehends object names as verbal referents. Behav Processes. 2011;86(2):184–195. 10.1016/j.beproc.2010.11.007 [DOI] [PubMed] [Google Scholar]
- Poeppel D, Emmorey K, Hickok G, et al. : Towards a new neurobiology of language. J Neurosci. 2012;32(41):14125–14131. 10.1523/JNEUROSCI.3244-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Poeppel D: Pure word deafness and the bilateral processing of the speech code. Cogn Sci. 2001;25(5):679–693. 10.1016/S0364-0213(01)00050-7 [DOI] [Google Scholar]
- Poremba A, Malloy M, Saunders RC, et al. : Species-specific calls evoke asymmetric activity in the monkey's temporal poles. Nature. 2004;427(6973):448–451. 10.1038/nature02268 [DOI] [PubMed] [Google Scholar]
- Premack D, Premack AJ: The Mind of an Ape. W. W. Norton. 1984. Reference Source [Google Scholar]
- Quigg M, Fountain NB: Conduction aphasia elicited by stimulation of the left posterior superior temporal gyrus. J Neurol Neurosurg Psychiatry. 1999;66(3):393–396. 10.1136/jnnp.66.3.393 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Quigg M, Geldmacher DS, Elias WJ: Conduction aphasia as a function of the dominant posterior perisylvian cortex. Report of two cases. J Neurosurg. 2006;104(5):845–848. 10.3171/jns.2006.104.5.845 [DOI] [PubMed] [Google Scholar]
- Rämä P, Courtney SM: Functional topography of working memory for face or voice identity. Neuroimage. 2005;24(1):224–234. 10.1016/j.neuroimage.2004.08.024 [DOI] [PubMed] [Google Scholar]
- Rauschecker JP, Scott SK: Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci. 2009;12(6):718–724. 10.1038/nn.2331 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rauschecker JP, Tian B: Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proc Natl Acad Sci U S A. 2000;97(22):11800–11806. 10.1073/pnas.97.22.11800 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rauschecker JP, Tian B, Hauser M: Processing of complex sounds in the macaque nonprimary auditory cortex. Science. 1995;268(5207):111–114. 10.1126/science.7701330 [DOI] [PubMed] [Google Scholar]
- Rauschecker JP, Tian B, Pons T, et al. : Serial and parallel processing in rhesus monkey auditory cortex. J Comp Neurol. 1997;382(1):89–103. [DOI] [PubMed] [Google Scholar]
- Recanzone GH: Representation of con-specific vocalizations in the core and belt areas of the auditory cortex in the alert macaque monkey. J Neurosci. 2008;28(49):13184–13193. 10.1523/JNEUROSCI.3619-08.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Remedios R, Logothetis NK, Kayser C: An auditory region in the primate insular cortex responding preferentially to vocal communication sounds. J Neurosci. 2009;29(4):1034–1045. 10.1523/JNEUROSCI.4089-08.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Remedios R, Logothetis NK, Kayser C: Monkey drumming reveals common networks for perceiving vocal and nonvocal communication sounds. Proc Natl Acad Sci U S A. 2009;106(42):18010–18015. 10.1073/pnas.0909756106 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rilling JK, Glasser MF, Jbabdi S, et al. : Continuity, divergence, and the evolution of brain language pathways. Front Evol Neurosci. 2012;3:11. 10.3389/fnevo.2011.00011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Roberts AC, Tomic DL, Parkinson CH, et al. : Forebrain connectivity of the prefrontal cortex in the marmoset monkey ( *Callithrix jacchus*): an anterograde and retrograde tract-tracing study. J Comp Neurol. 2007;502(1):86–112. 10.1002/cne.21300 [DOI] [PubMed] [Google Scholar]
- Robinson BW: Vocalization evoked from forebrain in Macaca mulatta. Physiol Behav. 1967;2(4):345–354. 10.1016/0031-9384(67)90050-9 [DOI] [Google Scholar]
- Rohrer JD, Ridgway GR, Crutch SJ, et al. : Progressive logopenic/phonological aphasia: erosion of the language network. Neuroimage. 2010;49(1):984–993. 10.1016/j.neuroimage.2009.08.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rohrer JD, Sauter D, Scott S, et al. : Receptive prosody in nonfluent primary progressive aphasias. Cortex. 2012;48(3):308–316. 10.1016/j.cortex.2010.09.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Roll P, Rudolf G, Pereira S, et al. : SRPX2 mutations in disorders of language cortex and cognition. Hum Mol Genet. 2006;15(7):1195–1207. 10.1093/hmg/ddl035 [DOI] [PubMed] [Google Scholar]
- Roll P, Vernes SC, Bruneau N, et al. : Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Hum Mol Genet. 2010;19(24):4848–4860. 10.1093/hmg/ddq415 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Romanski LM, Averbeck BB, Diltz M: Neural representation of vocalizations in the primate ventrolateral prefrontal cortex. J Neurophysiol. 2004;93(2):734–747. 10.1152/jn.00675.2004 [DOI] [PubMed] [Google Scholar]
- Romanski LM, Bates JF, Goldman-Rakic PS: Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey. J Comp Neurol. 1999;403(2):141–157. [DOI] [PubMed] [Google Scholar]
- Russ BE, Ackelson AL, Baker AE, et al. : Coding of auditory-stimulus identity in the auditory non-spatial processing stream. J Neurophysiol. 2007;99(1):87–95. 10.1152/jn.01069.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Russo GS, Bruce CJ: Frontal eye field activity preceding aurally guided saccades. J Neurophysiol. 1994;71(3):1250–1253. [DOI] [PubMed] [Google Scholar]
- Sahyoun CP, Soulières I, Belliveau JW, et al. : Cognitive differences in pictorial reasoning between high-functioning autism and Asperger’s syndrome. J Autism Dev Disord. 2009;39(7):1014–1023. 10.1007/s10803-009-0712-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saur D, Kreher BW, Schnell S, et al. : Ventral and dorsal pathways for language. Proc Natl Acad Sci U S A. 2008;105(46):18035–18040. 10.1073/pnas.0805234105 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scheich H, Baumgart F, Gaschler-Markefski B, et al. : Functional magnetic resonance imaging of a human auditory cortex area involved in foreground-background decomposition. Eur J Neurosci. 1998;10(2):803–809. 10.1046/j.1460-9568.1998.00086.x [DOI] [PubMed] [Google Scholar]
- Schmahmann JD, Pandya DN, Wang R, et al. : Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography. Brain. 2007;130(Pt 3):630–653. 10.1093/brain/awl359 [DOI] [PubMed] [Google Scholar]
- Schrenk F, Bromage TG, Betzler CG, et al. : Oldest Homo and Pliocene biogeography of the Malawi Rift. Nature. 1993;365(6449):833–836. 10.1038/365833a0 [DOI] [PubMed] [Google Scholar]
- Schwartz MF, Kimberg DY, Walker GM, et al. : Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia. Brain. 2009;132(Pt 12):3411–3427. 10.1093/brain/awp284 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scott SK, Blank CC, Rosen S, et al. : Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 2000;123(Pt 12):2400–2406. 10.1093/brain/123.12.2400 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Selnes OA, Knopman DS, Niccum N, et al. : The critical role of Wernicke’s area in sentence repetition. Ann Neurol. 1985;17(6):549–557. 10.1002/ana.410170604 [DOI] [PubMed] [Google Scholar]
- Seltzer B, Pandya DN: Further observations on parieto-temporal connections in the rhesus monkey. Exp Brain Res. 1984;55(2):301–312. 10.1007/BF00237280 [DOI] [PubMed] [Google Scholar]
- Shriberg LD, Ballard KJ, Tomblin JB, et al. : Speech, prosody, and voice characteristics of a mother and daughter with a 7;13 translocation affecting FOXP2. J Speech Lang Hear Res. 2006;49(3):500–525. 10.1044/1092-4388(2006/038) [DOI] [PubMed] [Google Scholar]
- Shu W, Cho JY, Jiang Y, et al. : Altered ultrasonic vocalization in mice with a disruption in the Foxp2 gene. Proc Natl Acad Sci U S A. 2005;102(27):9643–9648. 10.1073/pnas.0503739102 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shultz S, Vouloumanos A, Pelphrey K: The superior temporal sulcus differentiates communicative and noncommunicative auditory signals. J Cogn Neurosci. 2012;24(5):1224–1232. 10.1162/jocn_a_00208 [DOI] [PubMed] [Google Scholar]
- Sia GM, Clem RL, Huganir RL: The Human Language-Associated Gene SRPX2 Regulates Synapse Formation and Vocalization in Mice. Science. 2013;342(6161):987–991. 10.1126/science.1245079 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Simões CS, Vianney PVR, de Moura MM, et al. : Activation of frontal neocortical areas by vocal production in marmosets. Front Integr Neurosci. 2010;4:123. 10.3389/fnint.2010.00123 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Smith KR, Hsieh IH, Saberi K, et al. : Auditory spatial and object processing in the human planum temporale: no evidence for selectivity. J Cogn Neurosci. 2010a;22(4):632–639. 10.1162/jocn.2009.21196 [DOI] [PubMed] [Google Scholar]
- Smith DV, Davis B, Niu K, et al. : Spatial attention evokes similar activation patterns for visual and auditory stimuli. J Cogn Neurosci. 2010b;22(2):347–361. 10.1162/jocn.2009.21241 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Square PA, Roy EA, Martin RE: Apraxia of speech: Another form of praxis disruption. In Apraxia: The Neuropsychology of Action. Psychology Press. 1997;173–206. Reference Source [Google Scholar]
- Steinschneider M, Volkov IO, Fishman YI, et al. : Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter. Cereb Cortex. 2004;15(2):170–186. 10.1093/cercor/bhh120 [DOI] [PubMed] [Google Scholar]
- Stepien LS, Cordeau JP, Rasmussen T: The effect of temporal lobe and hippocampal lesions on auditory and visual recent memory in monkeys. Brain. 1960;83(3):470–489. 10.1093/brain/83.3.470 [DOI] [Google Scholar]
- Stewart L, Walsh V, Frith U, et al. : TMS produces two dissociable types of speech disruption. Neuroimage. 2001;13(3):472–478. 10.1006/nimg.2000.0701 [DOI] [PubMed] [Google Scholar]
- Stewart L, von Kriegstein K, Warren JD, et al. : Music and the brain: disorders of musical listening. Brain. 2006;129(Pt 10):2533–2553. 10.1093/brain/awl171 [DOI] [PubMed] [Google Scholar]
- Stricanne B, Andersen RA, Mazzoni P: Eye-centered, head-centered, and intermediate coding of remembered sound locations in area LIP. J Neurophysiol. 1996;76(3):2071–2076. [DOI] [PubMed] [Google Scholar]
- Striem-Amit E, Hertz U, Amedi A: Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding fMRI. PLoS One. 2011;6(3):e17832. 10.1371/journal.pone.0017832 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Strominger NL, Oesterreich RE, Neff WD: Sequential auditory and visual discriminations after temporal lobe ablation in monkeys. Physiol Behav. 1980;24(6):1149–1156. 10.1016/0031-9384(80)90062-1 [DOI] [PubMed] [Google Scholar]
- Studdert-Kennedy M: How did language go discrete? In Language Origins: Perspectives on Evolution. 2005;48. Reference Source [Google Scholar]
- Sugiura H: Matching of acoustic features during the vocal exchange of coo calls by Japanese macaques. Anim Behav. 1998;55(3):673–687. 10.1006/anbe.1997.0602 [DOI] [PubMed] [Google Scholar]
- Sutton D, Larson C, Lindeman RC: Neocortical and limbic lesion effects on primate phonation. Brain Res. 1974;71(1):61–75. 10.1016/0006-8993(74)90191-7 [DOI] [PubMed] [Google Scholar]
- Sweet RA, Dorph-Petersen KA, Lewis DA: Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus. J Comp Neurol. 2005;491(3):270–289. 10.1002/cne.20702 [DOI] [PubMed] [Google Scholar]
- Symmes D, Biben M: Maternal recognition of individual infant squirrel monkeys from isolation call playbacks. Am J Primatol. 1985;9(1):39–46. 10.1002/ajp.1350090105 [DOI] [PubMed] [Google Scholar]
- Taglialatela JP, Savage-Rumbaugh S, Baker LA: Vocal production by a language-competent Pan paniscus. Int J Primatol. 2003;24(1):1–17. 10.1023/A:1021487710547 [DOI] [Google Scholar]
- Tanné-Gariépy J, Rouiller EM, Boussaoud D: Parietal inputs to dorsal versus ventral premotor areas in the macaque monkey: evidence for largely segregated visuomotor pathways. Exp Brain Res. 2002;145(1):91–103. 10.1007/s00221-002-1078-9 [DOI] [PubMed] [Google Scholar]
- Tata MS, Ward LM: Early phase of spatial mismatch negativity is localized to a posterior “where” auditory pathway. Exp Brain Res. 2005a;167(3):481–486. 10.1007/s00221-005-0183-y [DOI] [PubMed] [Google Scholar]
- Tata MS, Ward LM: Spatial attention modulates activity in a posterior “where” auditory pathway. Neuropsychologia. 2005b;43(4):509–516. 10.1016/j.neuropsychologia.2004.07.019 [DOI] [PubMed] [Google Scholar]
- Tian B, Reser D, Durham A, et al. : Functional specialization in rhesus monkey auditory cortex. Science. 2001;292(5515):290–293. 10.1126/science.1058911 [DOI] [PubMed] [Google Scholar]
- Tobias PV: The brain of Homo habilis: A new level of organization in cerebral evolution. J Hum Evol. 1987;16(7–8):741–761. 10.1016/0047-2484(87)90022-4 [DOI] [Google Scholar]
- Towle VL, Yoon HA, Castelle M, et al. : ECoG gamma activity during a language task: differentiating expressive and receptive speech areas. Brain. 2008;131(8):2013–2027. 10.1093/brain/awn147 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tsunada J, Lee JH, Cohen YE: Representation of speech categories in the primate auditory cortex. J Neurophysiol. 2011;105(6):2634–2646. 10.1152/jn.00037.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Turken AU, Dronkers NF: The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front Syst Neurosci. 2011;5:1–20. 10.3389/fnsys.2011.00001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ulrich G: Interhemispheric functional relationships in auditory agnosia. An analysis of the preconditions and a conceptual model. Brain Lang. 1978;5(3):286–300. 10.1016/0093-934X(78)90027-5 [DOI] [PubMed] [Google Scholar]
- Ungerleider LG, Haxby JV: ‘What’ and ‘where’ in the human brain. Curr Opin Neurobiol. 1994;4(2):157–165. 10.1016/0959-4388(94)90066-3 [DOI] [PubMed] [Google Scholar]
- Vaadia E, Benson DA, Hienz RD, et al. : Unit study of monkey frontal cortex: active localization of auditory and of visual stimuli. J Neurophysiol. 1986;56(4):934–952. [DOI] [PubMed] [Google Scholar]
- Vanderhorst VG, Terasawa E, Ralston HJ: Monosynaptic projections from the nucleus retroambiguus region to laryngeal motoneurons in the rhesus monkey. Neuroscience. 2001;107(1):117–125. 10.1016/S0306-4522(01)00343-8 [DOI] [PubMed] [Google Scholar]
- Vanderhorst VG, Terasawa E, Ralston HJ, et al. : Monosynaptic projections from the lateral periaqueductal gray to the nucleus retroambiguus in the rhesus monkey: implications for vocalization and reproductive behavior. J Comp Neurol. 2000;424(2):251–268. [DOI] [PubMed] [Google Scholar]
- Viceic D, Fornari E, Thiran JP, et al. : Human auditory belt areas specialized in sound recognition: a functional magnetic resonance imaging study. Neuroreport. 2006;17(16):1659–1662. 10.1097/01.wnr.0000239962.75943.dd [DOI] [PubMed] [Google Scholar]
- Vigneau M, Beaucousin V, Hervé PY, et al. : Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. Neuroimage. 2006;30(4):1414–1432. 10.1016/j.neuroimage.2005.11.002 [DOI] [PubMed] [Google Scholar]
- Vignolo LA, Boccardi E, Caverni L: Unexpected CT-scan findings in global aphasia. Cortex. 1986;22(1):55–69. 10.1016/S0010-9452(86)80032-6 [DOI] [PubMed] [Google Scholar]
- Wallace MN, Johnston PW, Palmer AR: Histochemical identification of cortical areas in the auditory region of the human brain. Exp Brain Res. 2002;143(4):499–508. 10.1007/s00221-002-1014-z [DOI] [PubMed] [Google Scholar]
- Warren JD, Griffiths TD: Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. J Neurosci. 2003;23(13):5799–5804. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Warren JD, Scott SK, Price CJ, et al. : Human brain mechanisms for the early analysis of voices. Neuroimage. 2006;31(3):1389–1397. 10.1016/j.neuroimage.2006.01.034 [DOI] [PubMed] [Google Scholar]
- Warren JD, Uppenkamp S, Patterson RD, et al. : Separating pitch chroma and pitch height in the human brain. Proc Natl Acad Sci U S A. 2003;100(17):10038–10042. 10.1073/pnas.1730682100 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Warren JD, Zielinski BA, Green GGR, et al. : Perception of sound-source motion by the human brain. Neuron. 2002;34(1):139–148. 10.1016/S0896-6273(02)00637-2 [DOI] [PubMed] [Google Scholar]
- Warren JE, Wise RJS, Warren JD: Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends Neurosci. 2005;28(12):636–643. 10.1016/j.tins.2005.09.010 [DOI] [PubMed] [Google Scholar]
- Watkins KE, Dronkers NF, Vargha-Khadem F: Behavioural analysis of an inherited speech and language disorder: comparison with acquired aphasia. Brain. 2002;125(Pt 3):452–464. 10.1093/brain/awf058 [DOI] [PubMed] [Google Scholar]
- Wernicke C, Tesak J: Der aphasische Symptomenkomplex. Springer Berlin Heidelberg. 1974;1–70. 10.1007/978-3-642-65950-8_1 [DOI] [Google Scholar]
- Wich SA, Swartz KB, Hardus ME, et al. : A case of spontaneous acquisition of a human sound by an orangutan. Primates. 2008;50(1):56–64. 10.1007/s10329-008-0117-y [DOI] [PubMed] [Google Scholar]
- Wise RJ, Scott SK, Blank SC, et al. : Separate neural subsystems within ‘Wernicke’s area’. Brain. 2001;124(Pt 1):83–95. 10.1093/brain/124.1.83 [DOI] [PubMed] [Google Scholar]
- Wood B, Baker J: Evolution in the Genus Homo. Annu Rev Ecol Evol Syst. 2011;42(1):47–69. 10.1146/annurev-ecolsys-102209-144653 [DOI] [Google Scholar]
- Wood B, Richmond BG: Human evolution: taxonomy and paleobiology. J Anat. 2000;197(Pt 1):19–60. 10.1046/j.1469-7580.2000.19710019.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Woods DL, Herron TJ, Cate AD, et al. : Functional properties of human auditory cortical fields. Front Syst Neurosci. 2010;4:155. 10.3389/fnsys.2010.00155 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Woods TM, Lopez SE, Long JH, et al. : Effects of stimulus azimuth and intensity on the single-neuron activity in the auditory cortex of the alert macaque monkey. J Neurophysiol. 2006;96(6):3323–3337. 10.1152/jn.00392.2006 [DOI] [PubMed] [Google Scholar]
- Yin P, Mishkin M, Sutter M, et al. : Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R. J Neurophysiol. 2008;100(6):3009–3029. 10.1152/jn.00828.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zaidel E: Auditory vocabulary of the right hemisphere following brain bisection or hemidecortication. Cortex. 1976;12(3):191–211. 10.1016/S0010-9452(76)80001-9 [DOI] [PubMed] [Google Scholar]
- Zatorre RJ, Bouffard M, Ahad P, et al. : Where is ‘where’ in the human auditory cortex? Nat Neurosci. 2002;5(9):905–909. 10.1038/nn904 [DOI] [PubMed] [Google Scholar]
- Zatorre RJ, Bouffard M, Belin P: Sensitivity to auditory object features in human temporal neocortex. J Neurosci. 2004;24(14):3637–3642. 10.1523/JNEUROSCI.5458-03.2004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang SP, Davis PJ, Bandler R, et al. : Brain stem integration of vocalization: role of the midbrain periaqueductal gray. J Neurophysiol. 1994;72(3):1337–1356. [DOI] [PubMed] [Google Scholar]
- Zwiers MP, Van Opstal AJ, Paige GD: Plasticity in human sound localization induced by compressed spatial vision. Nat Neurosci. 2003;6(2):175–181. 10.1038/nn999 [DOI] [PubMed] [Google Scholar]