Abstract
Individual recognition is critical in social animal communication, but it has not been demonstrated for a complete vocal repertoire. Deciphering the nature of individual signatures across call types is necessary to understand how animals solve the problem of combining, in the same signal, information about identity and behavioral state. We show that distinct signatures differentiate zebra finch individuals for each call type. The distinctiveness of these signatures varies: contact calls bear strong individual signatures while calls used during aggressive encounters are less individualized. We propose that the costly solution of using multiple signatures evolved because of the limitations of the passive filtering properties of the birds’ vocal organ for generating sufficiently individualized features. Thus, individual recognition requires the memorization of multiple signatures for the entire repertoire of conspecifics of interest. We show that zebra finches excel at these tasks.
Subject terms: Signal processing, Animal behaviour
Individual animals have vocal signatures, but are the same signatures consistent across behavioral contexts? Here, the authors use behavioral experiments and acoustic analyses to show that zebra finches have distinct vocal signatures for different call types, such as those used in aggression and long-distance contact.
Introduction
Humans share the ability to recognize familiar individuals using vocal cues with many other animal species that rely on individual-specific relationships, such as mated pair bonds, the mother−young bond, or the tolerance relationships developed by neighboring territorial animals (e.g. birds1–3; mammals4–7; amphibians8,9; reviewed in refs. 10–12). Social animals also have complex vocal repertoires composed of multiple call types that they use to communicate different behavioral states (e.g. alarm, hunger, need for contact). Investigations of individual vocal recognition in animals, including humans, have demonstrated individual discrimination either in single call types (e.g. ref. 13) or in a few call types7,14–21. It remains unknown, however, whether individual recognition operates over an entire vocal repertoire or, for humans, irrespective of the vocalization produced, be it laughing, speaking, shouting or whispering22. If vocal recognition were to exist throughout a repertoire, it would provide a unique opportunity to study the nature of the individual signature, given the particular physical constraints of sound production and the ecological pressure to also produce multiple distinctive call types.
The percept of the human voice results in part from the combination of fundamental frequency modulations determined by the properties of the vocal source, and of specific spectral shaping by the vocal tract12,23. Individual information in animals could similarly be the result of individual morphological differences in the vocal apparatus that would cause similar passive shaping of the sound across all calls in one’s repertoire: the passive-voice-cues24. In support of that hypothesis, vocal tract resonances have been found to provide reliable cues to an individual’s identity in some calls of nonhuman mammals14,15,24,25. Alternatively, or in addition to passive-voice-cues, humans and animals can actively control their vocal organ not only to produce the different utterances or call types that carry different meanings26, but also to either exaggerate their voice features, active-voice-cues (red deer27, humans28), or to implicitly advertise one’s identity, signature cues (dolphins’ whistles5,29, and humans’ stereotypical volitional laughing bouts30). The nonexclusive passive-voice, active-voice, and signature strategies have yet to be explored when studying individual recognition across a repertoire. Here, we contrast the use of passive-voice-cues (simply called voice from now on) with signature cues.
The zebra finch is a highly vocal social songbird that relies on a set of 11 call types to communicate different behavioral states, intents or needs31,32. Because the coding of identity has been shown independently for some of these call types (Begging calls33,34; Distance calls1,32,35–41; Song42–45 and soft calls46,47, but see ref. 48 for Long Tonal calls), it is a good model to investigate mechanisms of identity coding throughout a whole repertoire. Here, using an operant conditioning design to measure behavioral discriminability and acoustic analyses to determine the features that carry individual information, we perform a thorough comparison of individual signatures across the repertoire of the zebra finch. We find that zebra finches do not produce signatures that generalize across call types but instead generate call-type-specific individual signatures. The strength of this signature varies across call types and is stronger for social contact calls. Our behavioral experiments show that birds are able to discriminate the identity of vocalizers for all call types of the adult repertoire and for the two call types unique to juveniles. Birds are also able to identify a vocalizer irrespective of call type, a task that requires the memorization of a set of vocal signatures.
Results
Experimental design
We used a modified go−no go task to test whether birds can discriminate the identity of vocalizers. In this task, birds maximize a food reward by interrupting the playback of vocalizations from a nonrewarded (NoRe) vocalizer and refraining from interrupting the playback of vocalizations from a rewarded (Re) vocalizer (Fig. 1). Re and NoRe vocalizations can correspond to two different stimuli (shaping: e.g. a single song of bird A vs. a single song of bird B), two sets of renditions of one call type from two different birds (single-call-type test: e.g. randomly chosen renditions of Distance Calls from bird A vs. randomly chosen renditions of Distance Calls from bird B), or two sets of renditions from all call types mixed together (all-call-type tests: random renditions of calls from the repertoire of bird A vs. random renditions of calls from the repertoire of bird B). These learning and discrimination tasks are progressively more complex and subject birds participated in all three in sequence (Fig. 2). We tested the discrimination for male vocalizers, female vocalizers, and juvenile vocalizers in randomized order. Finally, the all-call-type tests were performed with calls from vocalizers that subjects had already discriminated in single-call-type tests as well as with calls from unfamiliar vocalizers.
In parallel, we performed bioacoustical analyses and supervised classification to assess the discriminability of vocalizers (see Methods) based on the information present in the acoustic signal, as has been done in other species (e.g. refs. 15,49). More specifically, 18 predefined acoustic features (PAF), along with the spectrograms, were used to characterize the acoustic properties of vocalizations, and three types of regularized classifiers were used to investigate the discriminability of vocalizer identity: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA) and random forest (RF). These analyses were performed for each call type separately as well as for all call types grouped together.
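The logic of this classification analysis can be illustrated with a minimal sketch (this is not the authors' code): three regularized classifiers are trained to recover vocalizer identity from acoustic features, with synthetic Gaussian features standing in for the 18 PAF and a mean shift between two simulated vocalizers playing the role of an individual signature.

```python
# Sketch of the classification pipeline, with synthetic data (hypothetical
# feature values) standing in for the 18 predefined acoustic features (PAF).
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_calls, n_features = 200, 18            # 18 PAF per call
vocalizer = rng.integers(0, 2, n_calls)  # two vocalizers, labeled 0 and 1
# Each vocalizer shifts the feature means, mimicking an individual signature.
X = rng.normal(size=(n_calls, n_features)) + vocalizer[:, None] * 1.5

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis(reg_param=0.1)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    # Tenfold cross-validation: train and test on separate data samples.
    acc = cross_val_score(clf, X, vocalizer, cv=10).mean()
    print(f"{name}: {acc:.2f} correct classification (chance = 0.50)")
```

With a signature this strong, all three classifiers score well above the 50% chance level; in the actual analyses, performance varies with call type as described below.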
Vocalizer discrimination for each call type
Figure 3 illustrates the individual signature we found in two call types: the Wsst call (Ws), used in aggressive behaviors, and the Distance call (DC), used to establish contact at distance. Acoustic features become identity cues when they vary between vocalizations emitted by different individuals but are consistent between renditions emitted by the same individual. The spectrograms depict the contrast in rendition consistency and vocalizer idiosyncrasy between call types: acoustic features are more individualized in DC than in Ws calls. Despite the lack of apparent identity coding in Ws calls, the performance of a classifier demonstrates that vocalizer discrimination is above chance level for both call types and that DC are highly individualized. The behavioral performance of a zebra finch demonstrates the perception of these individual signatures in both call types and confirms their difference in strength: vocalizer discrimination is significantly above chance for both, but the behavioral performance is four times higher for DC than for Ws calls (Fig. 3).
We further investigated vocalizer discrimination for each of the call types in the zebra finch’s repertoire. A multiple pair-wise discrimination approach was used for the classifier so that its performance could be compared between call types for which we had a variable number of vocalizers, as well as to behavioral performance measured in the operant task. For each of the call types, a large and significant majority of the pairs of vocalizers tested were discriminated above chance level (Fig. 4a). Classifiers using other feature spaces and nonlinear or noncontinuous decision boundaries yielded similar performance (Supplementary Fig. 1). Thus, an acoustical individual signature is present in all call types and the operant task shows that zebra finches can use this information to discriminate vocalizers irrespective of the call type (Fig. 4b and Supplementary Table 1).
The strength of individual signatures depended on the call type. Classifier and behavioral performances were better for vocalizations used to establish contact (DC, LT, and Te; see Fig. 4 for call labels) and for categories used to sound the alarm (Tu and Th) than for vocalizations communicating aggressive intent (Ws calls; Fig. 4a, b). The strength of the individual signature can be particularly impressive for contact calls: with DC, the classifier reached 100% correct classification on cross-validation data (testing set: n = 3514 calls) for 64/171 pairs of vocalizers, and the average classification across these 171 vocalizer pairs was 95%. Thus, birds produce strong individual signatures in contact calls, which could ensure that an acoustic network is maintained irrespective of the distance between individuals and of the access to visual cues40,41. This strong individual signature also allows for the recognition of multiple individuals (as illustrated by the Discriminant Analysis shown in Fig. 3b). Therefore, in addition to identifying their mate, zebra finches could use Distance calls to identify multiple individuals. This multiplicity of individual recognition remains to be demonstrated in a natural context11.
We also investigated if the difference in behavioral performance between call types could be explained by differences in the acoustical signature as revealed by the classifier. Figure 4c plots the behavioral performance of subjects against the classifier performance for the same pairs of vocalizers and reveals a small but significant correlation between the two. Discrepancies between classifier and behavioral performance could be due to (1) behavioral noise, (2) differences in the actual renditions for subject pairs that are used in the behavioral test vs. the classifier cross-validated dataset (both generated randomly) and (3) subjects emphasizing or using different acoustical features than the classifier. Note, however, that classifiers using spectrograms as feature spaces and nonlinear or noncontinuous decision boundaries yielded similar machine performance as those using PAF and LDA as reported in Fig. 3a (see Supplementary Fig. 1).
Generalization across renditions from the same call type
Since tests are composed of many trials, we tested whether subjects learn the discrimination by memorizing some aspect of each individual rendition or are able to generalize across renditions. Increases in behavioral performance over the course of single-call-type tests (Fig. 5a) demonstrate that birds learn to perform the discrimination task better later in the day for most call types (Likelihood ratio test, or LRT, on a Generalized linear mixed effect model, or GLME, χ²₂ = 308.7, p = 0; see Supplementary Table 1 for all statistical results). To test the hypothesis that improvement in discrimination was not due to birds learning and memorizing each individual rendition as being respectively rewarded or not rewarded, we examined the following predictions: (1) birds should discriminate above chance at the beginning of the daily test (first 30 min); (2) their behavioral performance should be significant on renditions they hear for the first time (which can be thought of as catch trials); and (3) for a given trial, their interruption behavior should not be better predicted by the number of times they have heard this particular rendition than by the number of times they have heard this call type from this vocalizer. The behavioral data show that these three predictions are verified. First, birds discriminated vocalizers above chance in the first 30 min (Fig. 5a; LRT on GLME, χ²₁ = 270.9, p = 0). Second, birds scored higher than chance on stimuli that were heard only once or twice (Fig. 5b; LRT on GLME, χ²₁ = 15.8, p = 7.0264 × 10−5). Note that because these first and second renditions most likely appear during the first 30 min, showing a significant discrimination on these stimuli demonstrates generalization almost from the beginning of the test.
Finally, the improvement in the correct behavioral response (interrupting NoRe and not interrupting Re) was fully predicted by the number of times birds had heard vocalizations from each vocalizer, with no additional effect of the number of times they had heard each particular rendition (LRT on GLME, effect of VocRank: χ²₁ = 401.1, p = 0; effect of RendRank with an offset based on VocRank: χ²₁ = 3.2, p = 0.075258). These results show that birds learn to generalize across renditions.
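The likelihood-ratio test logic used throughout these results can be sketched in a few lines: a full model including the predictor of interest is compared to a reduced model without it, and twice the log-likelihood difference is referred to a chi-square distribution. The log-likelihood values below are hypothetical; the actual analysis fits binomial GLMEs with subject as a random effect.

```python
# Minimal sketch of a likelihood-ratio test between nested models.
# The log-likelihood values are hypothetical stand-ins for fitted GLMEs.
from scipy.stats import chi2

loglik_reduced = -512.4  # model without the predictor (hypothetical value)
loglik_full = -490.1     # model including the predictor (hypothetical value)
df_extra = 1             # one extra parameter in the full model

lr_stat = 2.0 * (loglik_full - loglik_reduced)  # the chi-square statistic
p_value = chi2.sf(lr_stat, df_extra)            # upper-tail probability
print(f"chi2({df_extra}) = {lr_stat:.1f}, p = {p_value:.2g}")
```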
Generalization across all call types
After demonstrating the existence of individual signatures in every call type and their discriminability when tested independently, we investigated if birds could still identify vocalizers in a task that randomly mixed call types in the same test (all-call-type test, Figs. 6, 7). Despite the difficulty of the task given the number (Supplementary Table 2) and diversity of vocalizations, subjects tested with familiar vocalizers (Fig. 6a) achieved significant discrimination of same-sex vocalizers in 24 out of 26 tests (black dots in Fig. 6b) and generalized across renditions (Fig. 7a, b). Although these subjects had gained experience with the same vocalizers during the preceding single-call-type tests, this familiarity could not solely explain the results: similar behavioral performances were obtained with subjects naive to the pair of vocalizers (Figs. 6c, d; 7c, d and Supplementary Fig. 2). The behavioral performance depended on the call type and was significantly above chance for all types except Song (Figs. 6b, d; 7b, d). Song was also the least interrupted call type (Supplementary Fig. 3) and we hypothesize that the decrease in interruptions reflects an intrinsic preference for listening to complete songs, which interferes with the birds’ performance.
Are birds using voice-cues shared between call types to perform this difficult task? To address this question, we investigated the generalization performance of classifiers to explore to what degree identity-bearing acoustic features are shared between call types. We found that the performance of the classifier dropped dramatically when it was trained and tested on different call types (Fig. 8). Similar effects are found when using other classifiers and/or the full spectrogram as an input feature space (Supplementary Fig. 1B). In summary, voice-cues are very weak in zebra finches. To achieve the significant discrimination of vocalizers in the very difficult all-call-type tests, bird subjects might use some of the weak voice-cues but, very likely, are also memorizing individual vocal signatures, in particular for calls that are highly individualized (e.g. Tet, Songs, DC; Figs. 6 and 8). Taken together, these results demonstrate that birds can quickly (<100 trials) learn the set of signatures unique to each bird and thereby identify vocalizers irrespective of the call type they produce.
Discussion
Zebra finches can discriminate vocalizers based on any calls in the repertoire and do so by memorizing sets of individual signatures, one for each call type of the repertoire. Individual recognition across multiple call types could be useful, for example, to assess the reliability of the information provided by specific individuals7 or to promote general collaborative or avoidant behavior with particular individuals.
Why have zebra finches evolved a system for individual recognition based on multiple signatures rather than on passive-voice-cues, which should be easier to produce, perceive and memorize? Voice-cues are, for a large part, the result of individual differences in the morphology of the vocal apparatus that affect all produced sounds. For example, differences in size, weight and age are reflected in the size of the vocal tract, which, in turn, results in strongly individualized voice-cues (e.g. ref. 15). In humans, the average spacing between formant frequencies across vowels is indicative of the sex and size of the individual; the variation in vocal tract morphology between individuals is most likely the origin of that correlation12,28. For zebra finches, however, vocal tract filtering is not a reliable marker of identity. The individual variation of vocal tract length observed in adult zebra finches is between 28 and 33 mm and corresponds to resonances in a very restricted frequency range of ~1 kHz around 5.5 kHz (or less with a closed beak50). Not only is this range of passive resonance frequencies small, but it also falls both within the range of resonant frequencies (3−10 kHz) that can be actively modulated by changing the opening of the beak or the size of the oropharyngeal−esophageal cavity51 and within the range used to encode distinct call types31. Indeed, zebra finches encode information in the spectral shape of their vocalizations, with distinct peaks or formants ranging from 500 Hz to 8 kHz for each call type31,52. Contrary to humans, where speaker-invariant vowel information is encoded mostly in the relative values of the first and second formant frequencies28,53, zebra finches seem to code information with relatively precise frequency values of their resonant peaks31.
We propose that, because of the limited bandwidth of frequencies provided by small variations in size across animals relative to the variation produced across call types, what could be a reliable indicator of a bird’s size and individuality is masked by variation of the same acoustic parameter within a bird. This hypothesis provides an explanation of how the socio-ecological pressures of sounding different, while producing any call type in the repertoire and under the constraint of being a small animal, have pushed the individual signature in birds away from the passive spectral shaping produced by the individualized morphology of their vocal organ. A similar hypothesis has been proposed for the evolution of identity-bearing whistles in dolphins: they would use a signaling strategy based on signatures because passive-voice-cues become unreliable as pressure changes when the animals communicate at different depths29.
If spectral shaping is not a reliable cue for identity, it is then expected that information about identity would rely more on temporal and fundamental frequency parameters than the information about call types. Performing a matched comparison of the information in spectral, temporal or fundamental features for discriminating vocalizers vs. call types, we show that the spectral feature space was the best subset of acoustical parameters for classifying call types (also in ref. 39) and that temporal and fundamental features were further recruited for the discrimination of vocalizers (Fig. 8c). This relative increase in the weight of fundamental and temporal features to encode identity information was particularly salient for contact and alarm calls (Supplementary Fig. 4). Analyzing the transfer of identity-bearing acoustic cues across call types for each subset of features shows that while temporal features mostly act as call-type-specific signature cues, spectral and fundamental features seem to be shared to some extent between types (voice-cues; Supplementary Figs. 4 and 5).
What are the mechanisms that generate this individual signature? Just as for spectral parameter variation, fundamental and temporal signatures could be due to the individual differences in the morphology of the vocal organ, vocal tract or respiratory system. Yet, these acoustic features are the least correlated with morphological parameters39. A nonexclusive hypothesis is that these idiosyncratic differences are created by neural control. In this hypothesis, a set of individualized motor programs (one for each call type) would generate the temporal and fundamental variations carrying the vocalizer identity. While these individualized motor programs could be genetically inherited, vocal learning could further modify or overwrite these programs for some of the call types. For example, when learning their Distance call by imitating a model, male zebra finches overwrite their innate template while females do not54. It is highly probable that learning by imitation plays an important role in generating Distance calls that vary across birds while enforcing a very low within bird variability. Birds could also learn to avoid particular features that are already used as individual signatures by other familiar conspecifics. Further experiments comparing male and female vocal identity coding, and investigating the effect of learning could reveal the interplay between hardwired motor programs and learned plastic motor programs. The crucial role that vocal learning in zebra finches could play in the production of highly individualized vocal signatures supports the recent hypothesis proposed by Pisanski et al.28 to explain the origin of plasticity in early hominoid speech. Sounding different or similar to another individual could have motivated the evolution of the plastic control of vocalizations used by group-living animals55 which, in humans, could have been further exploited to generate the linguistic content of speech sounds. 
The voice plasticity necessary to shape one’s identity could be one of the crucial first steps in the evolution of language that could be shared across a few social and vocal vertebrates.
Methods
Subjects
Fifty-eight domestic zebra finches (Taeniopygia guttata) were used for this study. Thirteen adults (7 females and 6 males; aged 11.5 ± 2.3 (SEM) months) were used as subjects for the behavioral experiments and 45 birds (adults: 13 females and 14 males, aged 17.6 ± 5.1 months; chicks: 7 females, 9 males and 2 of unknown sex, aged 20.3 ± 0.9 days) were used to constitute the vocalization databank used as stimuli (previously described in ref. 31). Adult birds came from three different colonies housed either at UC Berkeley or UC San Francisco. All experiments were approved by the Animal Care and Use Committee at the University of California at Berkeley.
Behavioral experiments
To investigate the ability of adult zebra finches to discriminate vocalizers based on acoustic cues, we used an operant conditioning paradigm previously developed in our laboratory and described elsewhere (see refs. 40,56, Fig. 1 and Supplementary Movie 1; custom Matlab code available upon request to the authors). Briefly, birds pecked a pad to trigger the playback of sequences of vocalizations from two unfamiliar conspecific birds. Each acoustic stimulus consisted of a sequence of six or three band-pass filtered (0.25−12 kHz) vocalizations of the same vocalizer and of the same call type, randomly positioned within a 6 s window. Vocalizations from one of the vocalizers were played infrequently (20% of the trials) and rewarded by 10 s access to seeds starting at the end of the ~6 s playback (Re Vocalizer), while the playback of vocalization sequences from the other vocalizer was frequent (80% of the trials) but not rewarded (NoRe Vocalizer). After initiating a trial (triggering the playback of a stimulus), the subject could either wait until the end of the playback or interrupt it by pecking again and, in this manner, initiate the following trial. Food access was only available after the playback of an Re Vocalizer call sequence and only if the subject waited until the end of the playback by refraining from interrupting: interrupting the Re stimulus canceled access to the reward. Because 80% of the trials initiated the playback of NoRe Vocalizer call sequences while only 20% initiated the playback of Re Vocalizer call sequences, subjects were expected to learn to specifically interrupt the NoRe Vocalizer and to refrain from interrupting the Re Vocalizer in order to maximize their food access. Behavioral performance in discriminating the two vocalizers is quantified by the odds ratio (OR): the ratio between the odds of interrupting the NoRe vocalizer and the odds of interrupting the Re vocalizer.
OR = [p_NoRe / (1 − p_NoRe)] / [p_Re / (1 − p_Re)]     (1)

Here p_NoRe / (1 − p_NoRe) is the odds of interrupting a NoRe stimulus, p_Re / (1 − p_Re) the odds of interrupting an Re stimulus, p_NoRe the probability of interrupting a NoRe stimulus and p_Re the probability of interrupting an Re stimulus. The probabilities of interruption were calculated by dividing the number of interrupted stimuli of a given type by the total number of triggered stimuli of that type. OR was calculated either as a running value over time or as an average over all the trials of a given day.
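The odds-ratio computation can be sketched as follows; the trial counts below are hypothetical, whereas the actual analysis used the trial log of each behavioral session.

```python
# Sketch of the odds-ratio (OR) computation from interruption counts.
# The counts in the example are hypothetical.

def interruption_odds(n_interrupted, n_triggered):
    """Odds of interruption = p / (1 - p), with p estimated from counts."""
    p = n_interrupted / n_triggered
    return p / (1.0 - p)

def odds_ratio(nore_interrupted, nore_total, re_interrupted, re_total):
    """OR = odds(interrupting NoRe) / odds(interrupting Re).
    OR > 1 indicates that the subject discriminates the two vocalizers."""
    return (interruption_odds(nore_interrupted, nore_total)
            / interruption_odds(re_interrupted, re_total))

# Example: a subject interrupts 60/80 NoRe trials but only 4/20 Re trials.
print(odds_ratio(60, 80, 4, 20))  # → 12.0
```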
To motivate the subjects, birds were fasted at least 15 h prior to the beginning of the experiment. Experiments started between 0900 hours and 1000 hours each day and terminated between 1400 hours and 1500 hours (~5 h duration). The birds were maintained in a fasting state (85−90% of their initial body weight) for the entire experimental series by giving them only 1.5 g of finch seeds at the end of each testing day on top of the rewards earned during the tests. The daylight cycle was 0700−2100 hours. This conditioning paradigm does not use negative reinforcement for incorrect responses and its rules are quickly acquired by subjects; 3−4 h of training involving only a few hundred pecks are sufficient to learn the procedure40,56.
For each subject, a random pair of males, a random pair of females, and a random pair of chicks were chosen from 24 vocalizers of our vocalization bank (7 females, 6 males, and 11 chicks). Each subject was then tested for its ability to discriminate these three pairs of vocalizers across all call types using different discrimination tasks. In the single-call-type tests, birds had to discriminate two male, two female or two chick vocalizers based on a single call type (7, 6, and 2 call types respectively, each tested on consecutive days; 15 tests total). In the all-call-type tests, birds had to discriminate two male or two female vocalizers based on all call types (up to 7 and 6 call types respectively tested at the same time; 2 tests total). To investigate the effect of the familiarity with vocalizations acquired during “single-call-type” tests on the behavioral performance of birds during “all-call-type” tests, seven female subjects ran an additional set of “all-call-type” tests with vocalizers they had never heard before. The significance of each test was evaluated with an exact test, and behavioral performance across subjects was evaluated with binomial Generalized Linear Mixed Effects models (GLME). See Supplementary Methods for further descriptions of the tasks and statistical analyses of behavioral performances.
Acoustical analysis
The acoustical analysis of vocalizations used the large database described in Elie and Theunissen31 that contains vocalizations from 27 adults (13 females and 14 males) and 18 juvenile zebra finches, annotated for vocalizer ID and call type. To quantify the individual signature present in these vocalizations, we used three supervised classifiers: LDA, QDA, and RF. The classifiers were trained to sort the vocalizations based on vocalizer ID using two distinct feature spaces to represent the sounds: PAF or an invertible spectrogram. The same classifiers based on the same PAF were also used to discriminate call types, repeating the analysis done in Elie and Theunissen31, to compare the importance of specific PAF for discriminating vocalizers vs. call types. The PAF consisted of 18 features describing the spectral (8), temporal (5), and fundamental (5) characteristics of each sound (see Supplementary Methods and also ref. 31). All acoustical features were obtained using custom Python code from the Theunissen lab (BioSound class in sound.py found in https://github.com/theunissenlab/soundsig; BioSound Tutorials with examples are found in https://github.com/theunissenlab/BioSoundTutorial).
All classifiers were trained and tested on separate data samples using up to tenfold cross-validation (min 5). Performance of the classifiers was quantified as percent correct classification on the cross-validated dataset. For the vocalizer classification shown in Fig. 3b for illustrative purposes, the classifier was trained to discriminate 11 vocalizers based on all the renditions from each of these birds for separate call types. For all other analyses of vocalizer discriminability, we trained and tested classifiers on all possible pair-wise comparisons. This pair-wise approach allowed us not only to directly compare the results to those obtained in the behavioral tests of vocalizer discrimination, but also to compare performance for classifying vocalizers across call types for which we had different numbers of birds. Irrespective of the number of vocalizers, chance performance is the pair-wise classification chance level of 50%. Performance metrics for each pair of vocalizers and each classification test consisted of binomial data (# of stimuli tested, # correctly classified) that were tested against chance using an exact binomial test. The significance of classifier performance across all vocalizers was tested by applying GLME on the number of cross-validated trials correctly classified vs. total number tested (see further details in Supplementary Methods).
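The per-pair significance test can be sketched as follows: for each pair of vocalizers, the number of correctly classified cross-validated trials is compared to the 50% chance level with an exact binomial test. The bird names and counts below are hypothetical.

```python
# Sketch of the exact binomial test applied to each vocalizer pair.
# Pair labels and counts are hypothetical.
from scipy.stats import binomtest

# (correctly classified, total cross-validated trials) per hypothetical pair
pair_results = {("bird_A", "bird_B"): (78, 120),
                ("bird_A", "bird_C"): (64, 120),
                ("bird_B", "bird_C"): (101, 120)}

for pair, (n_correct, n_tested) in pair_results.items():
    # One-sided test: is performance significantly above the 50% chance level?
    p = binomtest(n_correct, n_tested, p=0.5, alternative="greater").pvalue
    print(pair, f"{n_correct}/{n_tested} correct, p = {p:.4f}")
```

In this example the first and third pairs are discriminated significantly above chance while the second is not.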
Classification of vocalizers was estimated for each call type separately (with fitting and validation data coming from the same call type) as well as across call types. In the across-call-type classification, classifiers were tested on vocalizations drawn from one particular call type but trained either on vocalizations drawn from at least five of the eight remaining adult call types (“Other” in Fig. 8a, Supplementary Figs. 1A and 5A) or on vocalizations from another single call type (off-diagonal bins in Fig. 8b, Supplementary Figs. 1B and 5B). A minimum of five vocalizations from the same vocalizer per call type was required for that vocalizer to be included in these calculations.
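The across-call-type logic can be illustrated with a small sketch (not the authors' code): an LDA classifier trained to tell two vocalizers apart from one call type is then tested on another call type. The synthetic data below gives each vocalizer a call-type-specific signature in hypothetical feature dimensions, so transfer performance falls toward chance, mimicking the weak voice-cues reported in the Results.

```python
# Sketch of across-call-type generalization with synthetic, call-type-specific
# signatures. Feature dimensions and shifts are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_calls, n_features = 150, 18

def make_calls(signature_dims):
    """Calls of one type from two vocalizers; `signature_dims` lists the
    feature dimensions that differ between the two birds (hypothetical)."""
    vocalizer = rng.integers(0, 2, n_calls)
    X = rng.normal(size=(n_calls, n_features))
    X[:, signature_dims] += vocalizer[:, None] * 2.0  # individual signature
    return X, vocalizer

X_dc, y_dc = make_calls([0, 1, 2])  # e.g. Distance calls: signature in dims 0-2
X_ws, y_ws = make_calls([6, 7, 8])  # e.g. Wsst calls: signature in dims 6-8

clf = LinearDiscriminantAnalysis().fit(X_dc[:100], y_dc[:100])
print("same call type  :", clf.score(X_dc[100:], y_dc[100:]))  # held-out DC
print("across call type:", clf.score(X_ws, y_ws))              # trained DC, tested Ws
```

Because the signature dimensions do not overlap between the two simulated call types, the classifier generalizes across renditions within a call type but not across call types.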
Acknowledgements
We thank Nicolas Mathevon and Michael Yartsev for their comments on a previous version of this manuscript. We are also grateful to Michelle R. Carney, Timothy Day and Yuka Minton for their assistance with running behavioral experiments. This study was funded by the NSF IIS 131-1446 and the NIDCD R01-016783.
Author contributions
Conceptualization, methodology, software, formal analysis, writing: J.E.E. and F.E.T.; Investigation: J.E.E.; Funding acquisition: F.E.T.
Data availability
The custom Python code used to calculate the acoustical features is available as the BioSound class in sound.py found in https://github.com/theunissenlab/soundsig. Tutorials with example data are found in https://github.com/theunissenlab/BioSoundTutorial. The complete library of vocalizations is available at Figshare: 10.6084/m9.figshare.11905533.v1. The behavioral data and behavioral analysis code are available from the corresponding authors upon reasonable request.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Change history
6/4/2020
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Contributor Information
Julie E. Elie, Email: julie.elie@berkeley.edu
Frédéric E. Theunissen, Email: theunissen@berkeley.edu
Electronic supplementary material
Supplementary Information accompanies this paper at 10.1038/s41467-018-06394-9.
References
- 1.Vignal C, Mathevon N, Mottin S. Audience drives male songbird response to partner’s voice. Nature. 2004;430:448–451. doi: 10.1038/nature02645. [DOI] [PubMed] [Google Scholar]
- 2.Briefer E, Rybak F, Aubin T. When to be a dear enemy: flexible acoustic relationships of neighbouring skylarks, Alauda arvensis. Anim. Behav. 2008;76:1319–1325. [Google Scholar]
- 3.Jouventin P. Visual and vocal signals in penguins, their evolution and adaptive characters. Fortschr. Verhalt. 1982;24:148–148. [Google Scholar]
- 4.Charrier I, Mathevon N, Jouventin P. Mother’s voice recognition by seal pups—newborns need to learn their mother’s call before she can take off on a fishing trip. Nature. 2001;412:873–873. doi: 10.1038/35091136. [DOI] [PubMed] [Google Scholar]
- 5.Janik VM. Whistle matching in wild bottlenose dolphins (Tursiops truncatus). Science. 2000;289:1355–1357. doi: 10.1126/science.289.5483.1355. [DOI] [PubMed] [Google Scholar]
- 6.Bergman TJ, Beehner JC, Cheney DL, Seyfarth RM. Hierarchical classification by rank and kinship in baboons. Science. 2003;302:1234–1236. doi: 10.1126/science.1087513. [DOI] [PubMed] [Google Scholar]
- 7.Cheney DL, Seyfarth RM. Assessment of meaning and the detection of unreliable signals by vervet monkeys. Anim. Behav. 1988;36:477–486. [Google Scholar]
- 8.Chuang MF, Kam YC, Bee MA. Territorial olive frogs display lower aggression towards neighbours than strangers based on individual vocal signatures. Anim. Behav. 2017;123:217–228. [Google Scholar]
- 9.Davis MS. Acoustically mediated neighbor recognition in the North American bullfrog, Rana catesbeiana. Behav. Ecol. Sociobiol. 1987;21:185–190. [Google Scholar]
- 10.Kondo N, Watanabe S. Contact calls: information and social function. Jpn. Psychol. Res. 2009;51:197–208. [Google Scholar]
- 11.Wiley RH. Specificity and multiplicity in the recognition of individuals: implications for the evolution of social behaviour. Biol. Rev. 2013;88:179–195. doi: 10.1111/j.1469-185X.2012.00246.x. [DOI] [PubMed] [Google Scholar]
- 12.Kreiman, J. & Sidtis, D. Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception (Blackwell Publishing, Oxford, UK, 2011).
- 13.Aubin T, Jouventin P, Hildebrand C. Penguins use the two-voice system to recognize each other. Proc. Biol. Sci. 2000;261:1081–1087. doi: 10.1098/rspb.2000.1112. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Rendall D. Acoustic correlates of caller identity and affect intensity in the vowel-like grunt vocalizations of baboons. J. Acoust. Soc. Am. 2003;113:3390–3402. doi: 10.1121/1.1568942. [DOI] [PubMed] [Google Scholar]
- 15.Reby D, Andre-Obrecht R, Galinier A, Farinas G, Cargnelutti B. Cepstral coefficients and hidden markov models reveal idiosyncratic voice characteristics in red deer (Cervus elaphus) stags. J. Acoust. Soc. Am. 2006;120:4080–408. doi: 10.1121/1.2358006. [DOI] [PubMed] [Google Scholar]
- 16.Blumenrath SH, Dabelsteen T, Pedersen SB. Vocal neighbour–mate discrimination in female great tits despite high song similarity. Anim. Behav. 2007;73:789–796. [Google Scholar]
- 17.Beecher MD, Campbell SE, Burt JM. Song perception in the song sparrow: birds classify by song type but not by singer. Anim. Behav. 1994;47:1343–1351. [Google Scholar]
- 18.Levrero F, Mathevon N. Vocal signature in wild infant chimpanzees. Am. J. Primatol. 2013;75:324–332. doi: 10.1002/ajp.22108. [DOI] [PubMed] [Google Scholar]
- 19.Sibiryakova OV, et al. The power of oral and nasal calls to discriminate individual mothers and offspring in red deer, Cervus elaphus. Front. Zool. 2015;12:2. doi: 10.1186/s12983-014-0094-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Weary DM, Krebs JR. Great tits classify songs by individual voice characteristics. Anim. Behav. 1992;43:283–287. [Google Scholar]
- 21.Lavan N, Scott SK, McGettigan C. Impaired generalization of speaker identity in the perception of familiar and unfamiliar voices. J. Exp. Psychol.: General. 2016;145:1604–1614. doi: 10.1037/xge0000223. [DOI] [PubMed] [Google Scholar]
- 22.Lavan, N., Burton, A. M., Scott, S. K. & McGettigan, C. Flexible voices: identity perception from variable vocal signals. Psychon. Bull. Rev. doi: 10.3758/s13423-018-1497-7 (2018). [DOI] [PMC free article] [PubMed]
- 23.Doddington G. Speaker recognition: identifying people by their voices. Proc. IEEE. 1985;73:1651–1662. [Google Scholar]
- 24.Rendall D, Owren MJ, Rodman PS. The role of vocal tract filtering in identity cueing in rhesus monkey (Macaca mulatta) vocalizations. J. Acoust. Soc. Am. 1998;103:602–614. doi: 10.1121/1.421104. [DOI] [PubMed] [Google Scholar]
- 25.Furuyama T, Kobayasi KI, Riquimaroux H. Role of vocal tract characteristics in individual discrimination by Japanese macaques (Macaca fuscata). Sci. Rep. 2016;6:32042. doi: 10.1038/srep32042. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Cheney DL, Seyfarth RM. Flexible usage and social function in primate vocalizations. Proc. Natl Acad. Sci. USA. 2018;115:1974–1979. doi: 10.1073/pnas.1717572115. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Reby D, et al. Red deer stags use formants as assessment cues during intrasexual agonistic interactions. Proc. Biol. Sci. 2005;272:941–947. doi: 10.1098/rspb.2004.2954. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Pisanski K, Cartei V, McGettigan C, Raine J, Reby D. Voice modulation: a window into the origins of human vocal control? Trends Cogn. Sci. 2016;20:304–318. doi: 10.1016/j.tics.2016.01.002. [DOI] [PubMed] [Google Scholar]
- 29.Sayigh LS, Wells RS, Janik VM. What’s in a voice? Dolphins do not use voice cues for individual recognition. Anim. Cogn. 2017;20:1067–1079. doi: 10.1007/s10071-017-1123-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Lavan N, Short B, Wilding A, McGettigan C. Impoverished encoding of speaker identity in spontaneous laughter. Evol. Hum. Behav. 2018;39:139–145. [Google Scholar]
- 31.Elie JE, Theunissen FE. The vocal repertoire of the domesticated zebra finch: a data-driven approach to decipher the information-bearing acoustic features of communication signals. Anim. Cogn. 2016;19:285–315. doi: 10.1007/s10071-015-0933-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Zann, R. A. The Zebra Finch: A Synthesis of Field and Laboratory Studies (Oxford University Press, Oxford, UK, 1996).
- 33.Ligout S, Dentressangle F, Mathevon N, Vignal C. Not for parents only: begging calls allow nest-mate discrimination in juvenile zebra finches. Ethology. 2016;122:193–206. [Google Scholar]
- 34.Levrero F, Durand L, Vignal C, Blanc A, Mathevon N. Begging calls support offspring individual identity and recognition by zebra finch parents. C. R. Biol. 2009;332:579–589. doi: 10.1016/j.crvi.2009.02.006. [DOI] [PubMed] [Google Scholar]
- 35.Vignal C, Mathevon N, Mottin S. Mate recognition by female zebra finch: analysis of individuality in male call and first investigations on female decoding process. Behav. Process. 2008;77:191–198. doi: 10.1016/j.beproc.2007.09.003. [DOI] [PubMed] [Google Scholar]
- 36.Mulard H, Vignal C, Pelletier L, Blanc A, Mathevon N. From preferential response to parental calls to sex-specific response to conspecific calls in juvenile zebra finches. Anim. Behav. 2010;80:189–195. [Google Scholar]
- 37.Jacot A, Reers H, Forstmeier W. Individual recognition and potential recognition errors in parent–offspring communication. Behav. Ecol. Sociobiol. 2010;64:1515–1525. [Google Scholar]
- 38.Vicario DS, Naqvi NH, Raksin JN. Sex differences in discrimination of vocal communication signals in a songbird. Anim. Behav. 2001;61:805–817. [Google Scholar]
- 39.Forstmeier W, Burger C, Temnow K, Deregnaucourt S. The genetic basis of zebra finch vocalizations. Evolution. 2009;63:2114–2130. doi: 10.1111/j.1558-5646.2009.00688.x. [DOI] [PubMed] [Google Scholar]
- 40.Mouterde SC, Elie JE, Theunissen FE, Mathevon N. Learning to cope with degraded sounds: female zebra finches can improve their expertise in discriminating between male voices at long distances. J. Exp. Biol. 2014;217:3169–3177. doi: 10.1242/jeb.104463. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Mouterde SC, Theunissen FE, Elie JE, Vignal C, Mathevon N. Acoustic communication and sound degradation: how do the individual signatures of male and female zebra finch calls transmit over distance? PLoS ONE. 2014;9:e102842. doi: 10.1371/journal.pone.0102842. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Miller DB. Long-term recognition of father’s song by female zebra finches. Nature. 1979;280:389–391. [Google Scholar]
- 43.Cynx J, Nottebohm F. Role of gender, season, and familiarity in discrimination of conspecific song by zebra finches (Taeniopygia guttata). Proc. Natl Acad. Sci. USA. 1992;89:1368–1371. doi: 10.1073/pnas.89.4.1368. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Miller DB. The acoustic basis of mate recognition by female zebra finches (Taeniopygia guttata). Anim. Behav. 1979;27:376–380. [Google Scholar]
- 45.Clayton NS. Song discrimination learning in zebra finches. Anim. Behav. 1988;36:1016–1024. [Google Scholar]
- 46.D’Amelio PB, Klumb M, Adreani MN, Gahr ML, ter Maat A. Individual recognition of opposite sex vocalizations in the zebra finch. Sci. Rep. 2017;7:5579. doi: 10.1038/s41598-017-05982-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Elie JE, et al. Vocal communication at the nest between mates in wild zebra finches: a private vocal duet? Anim. Behav. 2010;80:597–605. [Google Scholar]
- 48.Reers H, Jacot A, Forstmeier W. Do zebra finch parents fail to recognise their own offspring? PLoS ONE. 2011;6:e18466. doi: 10.1371/journal.pone.0018466. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Prat Y, Taub M, Yovel Y. Everyday bat vocalizations contain information about emitter, addressee, context, and behavior. Sci. Rep. 2016;6:39419. doi: 10.1038/srep39419. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Daley M, Goller F. Tracheal length changes during zebra finch song and their possible role in upper vocal tract filtering. J. Neurobiol. 2004;59:319–330. doi: 10.1002/neu.10332. [DOI] [PubMed] [Google Scholar]
- 51.Ohms VR, Snelderwaard PC, ten Cate C, Beckers GJL. Vocal tract articulation in zebra finches. PLoS ONE. 2010;5:e11923. doi: 10.1371/journal.pone.0011923. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Riede T, Schilling N, Goller F. The acoustic effect of vocal tract adjustments in zebra finches. J. Comp. Physiol. A. Neuroethol. Sens. Neural Behav. Physiol. 2013;199:57–69. doi: 10.1007/s00359-012-0768-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Hillenbrand J, Getty LA, Clark MJ, Wheeler K. Acoustic characteristics of American English vowels. J. Acoust. Soc. Am. 1995;97:3099–3111. doi: 10.1121/1.411872. [DOI] [PubMed] [Google Scholar]
- 54.Simpson HB, Vicario DS. Brain pathways for learned and unlearned vocalizations differ in zebra finches. J. Neurosci. 1990;10:1541–1556. doi: 10.1523/JNEUROSCI.10-05-01541.1990. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Sewall KB, Young AM, Wright TF. Social calls provide novel insights into the evolution of vocal learning. Anim. Behav. 2016;120:163–172. doi: 10.1016/j.anbehav.2016.07.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Perez EC, et al. Physiological resonance between mates through calls as possible evidence of empathic processes in songbirds. Horm. Behav. 2015;75:130–141. doi: 10.1016/j.yhbeh.2015.09.002. [DOI] [PubMed] [Google Scholar]