Royal Society Open Science. 2023 Oct 4;10(10):230835. doi: 10.1098/rsos.230835

Evidence for vocal signatures and voice-prints in a wild parrot

Simeon Q. Smeele 1,2,3, Juan Carlos Senar 4, Lucy M. Aplin 1,5,6, Mary Brooke McElreath 1,2
PMCID: PMC10548090  PMID: 37800160

Abstract

In humans, identity is partly encoded in a voice-print that is carried across multiple vocalizations. Other species also signal vocal identity in calls, such as shown in the contact call of parrots. However, it remains unclear to what extent other call types in parrots are individually distinct, and whether there is an analogous voice-print across calls. Here we test if an individual signature is present in other call types, how stable this signature is, and if parrots exhibit voice-prints across call types. We recorded 5599 vocalizations from 229 individually marked monk parakeets (Myiopsitta monachus) over a 2-year period in Barcelona, Spain. We examined five distinct call types, finding evidence for an individual signature in three. We further show that in the contact call, while birds are individually distinct, the calls are more variable than previously assumed, changing over short time scales (seconds to minutes). Finally, we provide evidence for voice-prints across multiple call types, with a discriminant function being able to predict caller identity across call types. This suggests that monk parakeets may be able to use vocal cues to recognize conspecifics, even across vocalization types and without necessarily needing active vocal signatures of identity.

Keywords: vocal signature, monk parakeet, parrots, Bayesian statistics, individual recognition

1. Introduction

Individual recognition and signalling of individual identity can play an important role in social interactions and decision-making. Examples of how individuals can benefit from individual recognition are wide-ranging, and include helping relatives [1], remembering reliable cooperators [2] and strategically directing aggression [3]. For the individual that is recognized, signalling identity is beneficial if the benefits associated with incurring affiliative behaviour outweigh potential costs associated with misidentification [4]. While it sometimes also pays to hide identity [4–6], in most cases, the benefits of broadcasting identity probably outweigh the potential costs. In fission–fusion societies, for instance, signalling identity may allow individuals to preferentially reassociate with a subset of the population when confronted with a large number of potential interaction partners [7,8]. Early human societies were fission–fusion based and probably heavily dependent on cooperation between individuals [9]; perhaps not surprisingly, the human face has evolved to allow for maximum individual distinctiveness [10].

Across species, individual identity has been found to be conveyed through multiple potential sensory modalities, including olfactory, acoustic or visual cues. For example, several social wasps display distinctive facial features [11]. However, while visual or olfactory distinctiveness is useful during close interactions, it is probably less effective across longer distances or in low-visibility environments such as tropical forests or turbid waters. Vocal signals are much better suited for these situations, and vocal broadcasting of identity has been found across a wide range of taxonomic groups, ranging from American goldfinches (Spinus tristis) [12] to bottle-nosed dolphins (Tursiops truncatus) [13]. These species often have one call type that is very stereotyped within individuals, with enough structural complexity to allow for many unique variants. For example, bottle-nosed dolphins produce a very stereotyped signature whistle when out of visual contact, where the individual signature is encoded in the frequency modulation pattern, or in other words how the frequency goes up and down [14]. Individuals predominantly produce ‘their’ signature whistle, and the duration combined with the frequency modulation allows for many unique patterns.

While a single vocal signal to broadcast identity is useful, individuals will often produce multiple call types, and could, therefore, benefit from being recognized across these calls. Three potential solutions to the need to be recognized in multiple call types are possible (figure 1). The first is making each call type individually distinct. Such a strategy has been shown in a variety of bird species [15–17], bats [18] and some primate species [19–21]. However, maintaining multiple signals of identity is cognitively demanding for signallers and receivers to remember; consequently, this strategy is probably constrained to species with either small vocal repertoires or small group sizes [15]. The second solution is to combine a single identity call with the other call types in a sequence [22]. The cognitive demands of this strategy are much lower, and if flexibly deployed, it potentially allows individuals to signal identity in contexts where recognition is beneficial and hide identity in other contexts. However, it increases the complexity and potential cost of vocal production, as all individually distinct vocalizations now involve at least two elements. The third solution is to evolve a recognizable voice-print across call types. This can be achieved via the specific morphology of the vocal production organ, leaving a unique and recognizable cue on all vocalizations that is consistent within individuals across call types but variable across individuals. This last solution is well suited for species that continuously modify the vocalizations they produce. It should be noted that such a voice-print differs from a vocal signature in that it is probably not actively produced, but is a by-product of the vocal tract. To distinguish between these types of vocal signals, throughout this study we use the term ‘individual signature’ to denote actively produced uniqueness within call types and ‘voice-print’ to denote the emergent individual signature resulting from vocal tract morphology.

Figure 1. Illustration of how animals can encode an individual signature in two call types. Squares are stylized spectrograms of contact and alarm calls. Rows within each hypothesis represent different versions of both call types for each individual. Hypothesis 1: each call type is distinct—the individual ID is encoded in the frequency modulation of the contact call and the pulse duration of the alarm call. Hypothesis 2: a dedicated identity call—individual ID is only encoded in the frequency modulation of the contact call. The alarm call is now a sequence of contact-alarm to encode both individual ID and call function. The alarm call can be highly variable within individuals. Hypothesis 3: a voice-print—there is no individual information encoded in the frequency modulation or the pulse duration, but instead there is a general voice-print (represented by colour) that goes across call types.

The best known example of this third strategy is the voice-print in humans. Humans have a complex communication system with an almost limitless number of sounds that can be produced, rendering it unfeasible to include identity calls in combination with secondary utterances. Yet despite this flexible production, the human vocal tract leaves an individually distinct cue in the timbre of the voice, allowing speakers to be recognized across most utterances [23]. To date, the potential for such a voice-print to occur in other animals has received surprisingly little attention. Thus far, voice-prints have only been shown in the mating calls of red deer stags (Cervus elaphus), where Reby et al. [24] used mel frequency cepstral coefficients (MFCCs) combined with a hidden Markov model to find that 63% of roars and barks could be correctly assigned to seven individuals. Notably, this study used relatively few call types and individuals of a fixed-repertoire species. To our knowledge there has been no study investigating voice-prints across call types in a non-human vocal learner with a large and flexible vocal repertoire. This is despite the fact that these species would benefit most from such an individual vocal recognition mechanism, since they might modify their contact call and thereby render an individual signature in frequency or duration less clear. Identifying which other species, if any, exhibit similar voice-prints is an important first step in understanding how vocal learning can evolve without obscuring individual identity information in the vocalizations.

Parrots are open-ended vocal production learners that often exhibit large and flexible vocal repertoires [25,26]. In this group, most research focus has been on contact calls, loud calls often made during group fusion events, or when individuals are isolated. These contact calls are probably socially learned in early stages of development [27,28] and are generally assumed to broadcast identity [29,30]. Some species also appear to actively modify their contact call over periods of weeks to converge with pairs or with flock mates [31,32], and there is even evidence for rapid convergence within vocal exchanges [33–35]. Despite this flexibility, some species have a stable individual signature in their contact call, at least within the time period of focus [30,36,37]. Additionally, other species have a stable group-level signature in their contact call that also appears to persist over long periods of time. For example, yellow-naped amazons (Amazona auropalliata) have dialects that remained virtually unchanged over a period of 11 years in some locations [38]. However, it is not known how much of an individual signature exists in call types other than the contact call for adult parrots (but see [39]), whether this is stable over time, or if vocal distinctiveness carries across call types as a voice-print.

In our study we addressed these questions in monk parakeets (Myiopsitta monachus), a communal nesting parrot with a large native and invasive range. Monk parakeets are popular pets with good vocal imitative abilities and, like all parrots, are lifelong vocal learners. Their contact calls have been extensively studied [30,40–43], with these studies suggesting that monk parakeet contact calls contain an individual signature [30]. In their invasive range, they also appear to exhibit geographically distinct dialects in contact calls [41,43], although this is much less pronounced in their native range [30]. However, it should be noted that no study has recorded vocalizations from a large set of individually marked monk parakeets, or extended this analysis to other call types. Here, we recorded 229 wild, individually marked monk parakeets in Barcelona, Spain, over a period of two months across two consecutive years, and manually categorized calls into 11 call types. First, for the five call types with enough data, we measured similarity between calls within the same call type and analysed the results with a Bayesian multi-level model to test how much individual signature exists in the most common monk parakeet call types and how stable these signatures are over time. Second, we tested how much individual information exists across call types by training the model on one set of call types and predicting on another set of call types. Based on previous work, we predicted high levels of individual signature in contact calls and lower levels in other call types. Additionally, we predicted a stable signature over a month-long period, with a reduction in similarity across years. Finally, if monk parakeets exhibit a voice-print in their vocalizations, we predicted that calls could be assigned to individuals across call types.

2. Methods

2.1. Study system

We studied monk parakeets in Parc de la Ciutadella and surrounding areas in Barcelona, Spain, where they have been reported as an invasive species since the late 1970s [44]. Parc de la Ciutadella, Promenade Passeig de Lluís Companys and Zoo de Barcelona form a continuous habitat of grass and asphalt with multiple tree species in which monk parakeets nest and forage. They build complex stick nests in trees and other structures, often building new nest chambers on top of already existing nest structures [45], creating colonies of birds living in close proximity.

Since May 2002, adults and juveniles have been regularly captured and marked using a walk-in trap at the Museu de Ciències Naturals de Barcelona, while fledglings have been marked directly at their nests [46]. Birds are ringed with unique leg-bands and fitted with neck collars with small tags displaying unique combinations of letters and digits. These are similar to small dog tags and can be read from up to 30 m with binoculars. This effort has resulted in over 3000 ringed birds since May 2002, of which 300–400 are recaptured/sighted each year. In November 2021, to increase the number of marked birds in the population for this study, we captured and tagged an additional 59 adults and juveniles at their nests, trapping individuals at night with hand nets. All birds were ringed with special permission EPI 7/2015 (01529/1498/2015) from Direcció General del Medi Natural i Biodiversitat, Generalitat de Catalunya, and with authorization to J.C.S. for animal handling for research purposes from Servei de Protecció de la Fauna, Flora i Animal de Companyia (001501-0402.2009).

2.2. Data collection

Vocalizations were recorded from marked individuals during two periods, 27 October to 19 November 2020 and 31 October to 30 November 2021 (55 days total), using a Sennheiser K6/ME67 shotgun microphone and Sony PCM D100 recorder from a distance ranging between 1 and 20 m. The IDs and behaviours of focal individuals, the behaviours of close-by individuals and the general contexts of the vocalizations were verbally annotated. Some recordings were also videotaped and IDs were transcribed afterwards.

In addition, we mapped all nests in the recording area using Gaia GPS on several Android cellphones. Errors were manually corrected to less than 10 m. In order to determine nest occupancy, we monitored nests multiple times throughout the day until an individual was observed inside the nest at least three times. Individuals were assigned to a nest entry if they were seen at least once inside one of the nest entrances. If they were sighted at multiple nests, they were assigned to the nest where they were most often sighted. If no birds were observed at a nest, we continued to monitor the nest daily for the duration of the recording period.

2.3. Data processing

All calls with fundamental frequencies clearly distinguishable from background noise and with no overlapping sounds were selected in Raven Lite [47]. Calls were then manually assigned to 11 broad call types based on structural similarity. For five of these we had a large enough sample size to analyse the individual signature. These were: (i) contact call—a frequency modulated call with at least three inflection points, (ii) tja call—a tonal call with a single rising frequency modulation, (iii) trruup call—a combination of amplitude-modulated introduction (similar to alarm calls) with a tonal ending (similar to the tja call), (iv) alarm—an amplitude-modulated call with at least four ‘notes’ and clear harmonics, predominantly used in distress situations, and (v) growl—an amplitude-modulated call with at least four ‘notes’ and no clear harmonics, predominantly used in social interactions (figure 2). Other call types were included for the cross call type analysis (see further down).

Figure 2. Example spectrograms of the call types included in the analysis of vocal signature. Settings: window length = 512, overlap = 89%, window type = Hanning. Darker colours (red) indicate more energy for that frequency (y-axis) at that time (x-axis).

We used four methods to measure similarity between calls: dynamic time warping (DTW, [48]), spectrographic cross correlation (SPCC, [49]), spectrographic analysis (SPECAN, specified in the electronic supplementary material) and mel frequency cepstral coefficient cross correlation (MF4C, specified in the electronic supplementary material). We present the results of SPCC in the main text, since SPCC could be run on all call types, is the most widely used method in previous work, and the other methods gave similar results. The results of all other methods are presented in the electronic supplementary material. SPCC consists of sliding two spectrograms over each other and calculating the summed pixel-wise difference between them at each offset. The difference at the offset of maximal overlap is then used as a measure of acoustic distance (see figure 3a for a schematic overview). We implemented our own function for SPCC in R [50] to remove as much background noise as possible (see the electronic supplementary material for details).
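To illustrate the cross-correlation step in a general way, the sketch below slides one spectrogram over another and records the mean pixel-wise difference at the offset of best alignment. This is a minimal sketch under simplifying assumptions (equal frequency resolution, no noise reduction) and is not the authors' implementation described in the electronic supplementary material; the helper names and file paths are hypothetical.

```r
# Minimal sketch of spectrographic cross correlation (SPCC); not the authors'
# noise-reduced implementation. Assumes both calls share the same sampling rate
# and spectrogram settings, so frequency bins match.
library(tuneR)
library(seewave)

make_spec <- function(path) {
  wav <- readWave(path)
  # spectro() returns a list; $amp is a frequency x time matrix (dB)
  spectro(wav, wl = 512, ovlp = 89, plot = FALSE)$amp
}

spec_dist <- function(s1, s2) {
  # slide the shorter spectrogram over the longer one
  if (ncol(s1) > ncol(s2)) { tmp <- s1; s1 <- s2; s2 <- tmp }
  offsets <- 0:(ncol(s2) - ncol(s1))
  dists <- sapply(offsets, function(o) {
    window <- s2[, (o + 1):(o + ncol(s1)), drop = FALSE]
    mean(abs(s1 - window))      # mean absolute pixel-wise difference at this offset
  })
  min(dists)                    # difference at the offset of maximal overlap
}

# d <- spec_dist(make_spec("call_1.wav"), make_spec("call_2.wav"))
```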

Figure 3. Workflow and results for spectrographic cross correlation. Black squares with thick red lines are stylized versions of the spectrograms. For real data see figure 2. (a) Schematic overview of the analysis pipeline. (b) Model results for contact calls, tja calls, trruup calls, alarm calls and growls (top to bottom). Blue density plots are the posterior contrast between the similarity of calls from different individuals versus the same individual and same recording. Green density plots are the posterior contrast between the similarity of calls from different individuals versus the same individual but different recordings. Little or no overlap with zero (dashed grey line) indicates a reliable signature of individual or recording. Blue lines are 16 samples from the posterior prediction of acoustic distance over time within a recording. Green lines are 16 samples from the posterior prediction of acoustic distance across days between recordings.

2.4. Statistical analysis

The first aim of this study was to determine whether call types contained an individual signature. Three of our methods (DTW, SPCC and MF4C) produce similarity matrices rather than single or multiple measures per call. The analysis of such a matrix is challenging, since most conventional models assume independent observations rather than pairwise distances. To estimate similarity between calls coming from the same individual compared with calls coming from different individuals, we, therefore, used a Bayesian model that is structurally similar to the social relations model [51]. The response variable was the dyadic acoustic distance, and the predictor variables were whether or not the calls came from the same individual, whether they came from the same recording, a unique ID for the recording dyad, a unique ID for the individual dyad and a unique ID for each of the two calls. This way we controlled for repeated and unbalanced sampling per individual, per recording and repeated comparisons per call (see the electronic supplementary material for the mathematical model definition).
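Purely to illustrate the random-effect structure described above, a roughly analogous dyadic regression could be written in brms-style syntax as below. The authors' actual model was written in Stan (see the electronic supplementary material), and the data-frame and column names here (acoustic_dist, same_individual, etc.) are hypothetical.

```r
# Illustrative brms sketch of a dyadic model with a structure roughly analogous
# to the one described in the text. Not the authors' Stan model; column names
# are hypothetical. Each row of `dyads` is one pair of calls.
library(brms)

fit <- brm(
  acoustic_dist ~ same_individual + same_recording +
    (1 | individual_dyad) +        # varying intercept per individual dyad
    (1 | recording_dyad) +         # varying intercept per recording dyad
    (1 | mm(call_1, call_2)),      # multi-membership term for the two calls in a dyad
  data   = dyads,
  family = gaussian()
)
```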

To visualize similarity between calls coming from the same versus different individuals, we computed the posterior contrast between the predicted acoustic distance between calls from two different individuals and between calls from two different recordings of the same individual. A contrast is the pairwise difference between samples of two distributions. This creates a new posterior distribution that reflects the modelled difference between two categories, in this case same versus different individual. We report the whole posterior density and the fraction of posterior samples that overlap with zero. If the contrast does not overlap zero, or there is only very little overlap, it indicates that, given the data and model structure, there is a difference between categories. To visualize similarity between calls from the same recording session, we additionally computed the posterior contrast between the predicted acoustic distance between calls from two different individuals and between calls from the same individual and the same recording.
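For concreteness, given vectors of posterior draws of the expected acoustic distance for two categories, the contrast and its overlap with zero reduce to the few lines below; the object names are placeholders for posterior draws extracted from the fitted model.

```r
# Posterior contrast: pairwise difference between draws of the expected acoustic
# distance for two categories (different individuals vs. same individual but
# different recordings). Object names are placeholders for posterior draws.
contrast <- mu_diff_individual - mu_same_individual_diff_recording
mean(contrast)        # mean contrast, as reported in the Results
mean(contrast <= 0)   # fraction of posterior samples overlapping zero
```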

The second aim was to test how stable the individual signatures were across time. We tested this across three scales: within a recording, across days and across years. We only used acoustic distances between calls from the same individual. We then modelled the acoustic distance as a function of the time separating the two calls with a Bayesian multi-level model (see the electronic supplementary material for the mathematical model definition). For the first model we included time on the log-scale. For the latter two models we only included acoustic distances between calls coming from different recordings, and time was measured as days between recordings or as same versus different year, respectively.
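A skeleton of the within-recording stability model, again in brms-style syntax purely for illustration, might look as follows; the authors' actual model definitions are in the electronic supplementary material, and the data frame and column names below are hypothetical.

```r
# Sketch of the within-recording model: acoustic distance between two calls of
# the same individual as a function of the (log-scaled) time separating them.
# Illustrative only; not the authors' model definition.
library(brms)

fit_time <- brm(
  acoustic_dist ~ log(seconds_apart) + (1 | individual),
  data   = same_individual_within_recording,   # hypothetical data frame of call pairs
  family = gaussian()
)
```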

Third, to assess how recognizable individuals were across call types we ran multiple permuted discriminant function analyses (pDFAs) on the MFCC summary statistics (mean and standard deviation). We chose to write our own function to run pDFA in R [50], so we could choose vocalizations from different recordings for the training and test sets, balance these datasets and compare the resulting scores with scores from a randomized dataset. This function was based on the work done by Mundry & Sommer [52]. To test how reliably a pDFA could recover individual identity within a call type, we first trained and tested a pDFA on contact calls. To test how much information was available across broad call type categories we trained a pDFA on amplitude-modulated calls (with clear interruptions in the amplitude—see figure 2 alarm, trruup and growl for examples) and then tested on tonal calls (with uninterrupted tonal components, see figure 2 contact and tja for examples) and vice versa (see figure 4 for a schematic overview). We grouped call types to obtain a large enough sample size for the pDFA and chose these categories to maximize dissimilarity between the two categories. For all pDFAs we report the 89% highest density interval of differences between the trained and randomized score. We also report the overlap with zero. If there is no overlap with zero, or the overlap with zero is very limited, it means the trained pDFA was performing above chance level. To test if the model learned features related to sex or background noise we re-ran the procedure on calls from females from Promenade Passeig de Lluís Companys, which is generally noisier, and also re-ran the procedure with labels restricted to be randomized within location (Promenade Passeig de Lluís Companys and Parc de la Ciutadella). Throughout the text we use pDFA to refer to a full set of permuted discriminant function analyses and DFA to refer to a single discriminant function analysis.
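To make the permutation logic concrete, a stripped-down version of one pDFA iteration might look like the sketch below, using MASS::lda on precomputed MFCC summary features. This is not the authors' function (which handles recording-based splits and balancing and is available in their repository); the data frames and column names are hypothetical.

```r
# Stripped-down sketch of one pDFA iteration on MFCC summary features
# (mean and s.d. per coefficient). Not the authors' implementation; column
# names are hypothetical.
library(MASS)

pdfa_once <- function(train, test, feature_cols) {
  # DFA trained with true individual labels
  fit_true   <- lda(train[, feature_cols], grouping = train$individual)
  score_true <- mean(predict(fit_true, test[, feature_cols])$class == test$individual)

  # DFA trained with randomized labels as the null comparison
  fit_rand   <- lda(train[, feature_cols], grouping = sample(train$individual))
  score_rand <- mean(predict(fit_rand, test[, feature_cols])$class == test$individual)

  score_true - score_rand   # difference between trained and random score
}

# Example: train on amplitude-modulated calls, test on tonal calls, repeating
# over many recording-based splits and summarizing the distribution of differences.
# diffs <- replicate(100, pdfa_once(am_calls, tonal_calls, feature_cols))
```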

Figure 4. Workflow for permuted linear discriminant analysis (LDA) across call types. For each iteration, recordings are split into a training and a testing set. A linear discriminant classifier is trained on mel frequency cepstral coefficients from amplitude-modulated calls from the training set, with calls labelled with individual ID. A random LDA, in which labels are randomized, is also trained. The correct classification percentage is calculated for the tonal calls from the testing set for both the trained and the random LDA. Black squares with thick red lines are stylized versions of the spectrograms.

All analyses were run in R [50] and scripts are publicly available on GitHub: https://github.com/simeonqs/Evidence_for_vocal_signatures_and_voice-prints_in_a_wild_parrot. All Bayesian models were run using the R package cmdstanr [53], which runs the Stan sampler [54]. Rhat values were monitored to ensure convergence.
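For reference, fitting a Stan model with cmdstanr and checking Rhat follows the general pattern below; the model file name and data list are placeholders, not the authors' actual files.

```r
# General cmdstanr workflow used for the Bayesian models; "model.stan" and
# `stan_data` are placeholders.
library(cmdstanr)

mod <- cmdstan_model("model.stan")
fit <- mod$sample(data = stan_data, chains = 4, parallel_chains = 4)

# Rhat values close to 1 indicate convergence
summary_tbl <- fit$summary()
max(summary_tbl$rhat, na.rm = TRUE)
```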

3. Results

In total, we recorded 5599 calls across 229 individually marked birds over the 2 years of data collection, 3242 in year 1 and 2357 in year 2. Our manual sorting led to 3203 contact calls, 185 tja calls, 265 trruup calls, 249 alarm calls and 364 growls. We then asked whether the five call types were individually distinctive. As expected from previous studies [30], we found a weak but reliable individual signature for the contact call (contrast mean: 0.06, overlap zero: 0.00, figure 3b). This contrast means that calls from the same individual are 0.06 closer to each other on the normalized scale (0 being completely similar, 1 being completely dissimilar) than calls from two different individuals. The trruup call contained an equally strong individual signature (contrast mean: 0.05, overlap zero: 0.00, figure 3b). The individual signature in alarm calls was relatively weaker (contrast mean: 0.02, overlap zero: 0.02, figure 3b). Finally, for the tja and growls there was no evidence for an individual signature (figure 3b).

Additionally, we found evidence in all call types for short-term temporal variability, with calls from the same recording sounding more similar than calls coming from two different recordings. For all calls other than the growl there was also an increase in acoustic distance with time throughout a recording (figure 3b). In other words, calls coming right after each other were more similar than calls spaced further apart in the recording. For the trruup call, alarm call and growl, acoustic distance also increased with days between recordings. However, at the largest time scale this temporal variability disappeared, with the individual signature stable between years and calls no more similar within a year than across years (figure 5). Our method did not allow us to test which spectral features (e.g. fundamental frequency or duration) changed most over time, since models were based on distance metrics (in other words, comparing two whole calls with each other, rather than several metrics for each call).

Figure 5. Posterior distributions for the Bayesian multi-level model with contrasts between recordings from the same versus different years. Greater values mean more within-year similarity. Little to no overlap with zero (grey dashed line) indicates reliable differences. Models were run separately for the main call types (columns) and methods (rows; DTW—dynamic time warping, SPCC—spectrographic cross correlation, MF4C—mel frequency cepstral coefficient cross correlation). N represents the number of calls included in the model.

We then used multiple pDFAs on the MFCC summary statistics (mean and standard deviation) to test whether DFAs trained on a subset of calls were able to successfully predict caller identity when presented with new calls. First, and as expected, results from the pDFAs further added to the evidence that contact calls contained an individual signature, with the trained DFA on average 36 percentage points more successful in predicting identity than a randomized DFA (table 1). We also found evidence that calls contain general individualized features that were maintained across call types. A pDFA with amplitude-modulated calls as training data and tonal calls as testing data, or vice versa, scored 16 and 10 percentage points higher, respectively, than random (table 1). The trained DFA outperformed the random DFA in all iterations of the model.

Table 1.

Table of all the results of permuted discriminant function analysis. The column ‘pDFA type’ contains information about how the pDFA was run: ‘combined’ included all recordings, ‘subset’ included only females in Promenade Passeig de Lluís Companys and ‘permuted’ was run where randomization was done within location. The column ‘call type’ contains information about which call types were included. For example, ‘tonal-growly’ means the model was trained on tonal calls and tested on amplitude-modulated calls and ‘contact’ means it was trained and tested on contact calls. The column ‘mean difference’ contains the mean difference between the trained and random DFAs. The column ‘lower bound’ contains the lower bound of the 89% highest density interval. The column ‘upper bound’ contains the upper bound of the 89% highest density interval. The column ‘overlap zero’ contains the fraction of iterations that were less than zero. The column ‘sample size’ contains the number of individuals included.

pDFA type   call type      mean difference   lower bound   upper bound   overlap zero   sample size
combined    contact        0.36              0.27          0.45          0.00           16
combined    tonal-growly   0.10              0.04          0.15          0.00           19
combined    growly-tonal   0.16              0.05          0.26          0.00           12
combined    all            0.13              0.09          0.17          0.00           52
subset      contact        0.20              0.04          0.37          0.01           9
subset      tonal-growly   0.14              0.03          0.27          0.02           11
subset      growly-tonal   0.02              −0.20         0.20          0.30           5
subset      all            0.17              0.06          0.27          0.00           17
permuted    contact        0.25              0.12          0.39          0.00           16
permuted    tonal-growly   0.06              −0.02         0.13          0.13           20
permuted    growly-tonal   0.16              −0.03         0.31          0.07           8
permuted    all            0.10              0.04          0.15          0.00           40

While we did our best to select calls with no overlapping sounds or background noise, it is possible that our analysis was still detecting features that were more likely to occur in calls of particular individuals. Alternatively, individuals might have called in a characteristic way in particular locations, creating a false signal in the data. To try to remove these potential biases, we re-ran our analysis using only females at Promenade Passeig de Lluís Companys. In this case, only the pDFA trained on tonal calls and tested on amplitude-modulated calls performed better than random (table 1). As this might be an effect of the greatly reduced dataset, we then re-ran our analysis with the full dataset, but restricted randomization to within location. In this case, the trained pDFA performed much better than chance, but overlap with zero increased to 13% and 7% for tonal to amplitude-modulated and vice versa, respectively (table 1).

4. Discussion

Many animals are likely to benefit from individual recognition. In many species of birds, this is thought to most likely occur through individually distinct vocalizations. Yet how this is achieved in species with open-ended vocal production learning, and in parrots in particular, has been under-studied (but see [36]). By recording vocalizations of individually marked wild monk parakeets across roughly month-long periods in two consecutive years, we reveal multiple insights into the vocal production of this parrot. First, we show that multiple call types given by monk parakeets contain a weak individual signature, but that this signature is relatively stable over time, persisting within and between years (see figures 3 and 5). Second, we show that calls are not stereotyped, but are highly variable over short time scales (seconds to minutes, figure 3b); within the same recording calls are generally more similar than calls from different recordings, and even within a recording calls close in time are more similar. Third, we tested if individual identity was distinguishable across call types. We used MFCCs, training a pDFA on one set of call types and testing on another set of call types, doing so across recordings to make sure background noise could not be ‘learned’ by the model. Our results suggest monk parakeets have a voice-print that exists across structurally different call types, although the strength of evidence varied across call types and analyses. To our knowledge this is the first evidence for the detection of voice-prints in a non-human vocal learner.

The ability to recognize individuals from their vocalizations should be highly advantageous in species with social systems like monk parakeets, where individuals may encounter many potential association partners during fission–fusion foraging dynamics. Previous studies have demonstrated individual signatures in the contact calls of monk parakeets [30], as well as in contact calls from other parrot species [36,37,55,56]. However, like many parrots, monk parakeets have a large and variable vocal repertoire, and individuals might benefit from individual recognition in multiple call types. For example, individual-level variation might be important for alarm calls, which are generally used when individuals are agitated by each other or by external threats [57]. The trruup call is also often given in situations where flocks fission (S. Q. Smeele 2021, personal observation), in which case it may be important to know which conspecifics are about to fly away. In support of this prediction, we found that three of five tested call types (contact, alarm and trruup) in monk parakeets contained some evidence for an individual signature. While we found no evidence for individual distinctiveness in the growl or tja calls, it might be that these calls do not require individual signatures: the tja call is often used in combination with other calls, and the growl is often used in close-range social interactions where identity might have already been established. Alternatively, it could be that these calls cannot support individual signatures: the tja is too short to allow for many unique variants, and the growl has no tonal structure in which identity information could potentially be encoded. This is in line with results found for chimpanzees (Pan troglodytes), where the short-range pant grunts contained less individual variation than other calls [58].

We proposed three hypotheses for how a vocal recognition system could be achieved in monk parakeets (figure 1). First, individuals could use individual signatures in several call types, unique to each call. Second, individuals could use a single unique signal that is added to the vocal sequences of multiple call types. Third, each individual could have a set of vocal features that are shared across all their calls, i.e. a voice-print. While our results provide evidence for an individual signature in some call types (supporting the first hypothesis), calls were also highly variable: the individual signatures in the contact call were reliable but decayed rapidly, although a weak individual signature remained even across years. Overall, our finding that a model trained on one set of call types could help predict individual identity in another set of call types best supports the third hypothesis, that monk parakeets possess a voice-print that exists across call types, with a shared set of structural features that make individuals recognizable. The fact that we found a voice-print even across structurally very different tonal and amplitude-modulated calls strongly suggests that this could be the dominant mode of recognition. Leroux et al. [59] put forward a method to detect voice-prints across sequences of calls, something that might improve individual recognition in follow-up studies that also include such sequences.

It should be noted that we used MFCCs and summarized these using the mean and standard deviation of each cepstral coefficient. There are two potential issues with this approach. First, the mel frequency scale was originally designed to represent how humans perceive sound [60], and it can be argued that this method is not designed to detect voice-prints in non-human vocalizations. However, we believe that it is suitable here, because the orange-fronted conure (Eupsittula canicularis), a slightly smaller parrot, has been shown to have a hearing sensitivity curve comparable to that of humans [61], and monk parakeets have their fundamental frequency between 1 and 2 kHz, which is higher than the human voice, but still within the band where the mel frequency filters have an effect [40]. While individuals are probably able to detect more detailed information than is captured by our summary statistics, this can currently only be disentangled experimentally. For instance, future work could use play-backs to establish if and how well monk parakeets and other parrot species are able to recognize individuals across call types. Under this paradigm, and similar to Charrier et al. [62], one could potentially modify calls during play-back to determine which spectral and temporal features are needed for individual recognition. Second, MFCCs can be sensitive to background noise. To deal with this, we ran several models to test how robust our results were when permuting the DFAs within location. We found that although performance decreased, there was still a clear trend for the trained DFA to outperform the random DFA. The drop in performance is probably a result of reduced sample size, and further studies are, therefore, needed to validate these results with a larger sample size.
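As an illustration of this summary step, MFCCs can be extracted per frame and then collapsed to a mean and standard deviation per coefficient, for example with tuneR::melfcc as sketched below; this is not the authors' exact parameterization, and the file names are placeholders.

```r
# Sketch of the MFCC summary used as classifier input: extract mel frequency
# cepstral coefficients per frame, then collapse each coefficient to its mean
# and standard deviation across the call. Not the authors' exact settings.
library(tuneR)

mfcc_summary <- function(path, numcep = 12) {
  wav <- readWave(path)
  cc  <- melfcc(wav, sr = wav@samp.rate, numcep = numcep)  # frames x coefficients
  c(colMeans(cc), apply(cc, 2, sd))                        # mean and s.d. per coefficient
}

# features <- t(sapply(call_files, mfcc_summary))
```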

It is also important to note that we cannot exclude the possibility that each call type also contains an individual signature in addition to the potential voice-print. However, if parrots can learn to recognize individuals based on a voice-print shared across calls, such a generalized mechanism relaxes the pressure to produce structural components in each call. This allows calls to include other signatures (e.g. group identity) and reduces the memory burden on the receiver significantly. There is also a good reason to expect voice-prints to be present in parrots. Unlike songbirds, which produce their vocalizations using two relatively independent syringeal sound sources, parrots have only one sound source and modulate their vocalizations using the trachea, tongue and beak. This is very similar to how humans produce the sounds that make up words [63–70]. This modulation or filtering by the vocal tract allows many more individual-specific features to arise, making a voice-print more recognizable. Finally, a distinct and recognizable voice-print could be a particularly useful strategy for managing individual recognition in species like parrots that are open-ended vocal learners living in complex but cohesive social groups.

Indeed, along with these main results, we found a high degree of within-individual call variability, with calls spaced 10 min apart much less similar than calls spaced a second apart. It is unlikely that variation in background noise played a role in producing this result, since dynamic time warping performed on manually cleaned fundamental frequency traces obtained similar results (see the electronic supplementary material). A more plausible explanation is that individuals are not capable of reproducing exactly the same call after too much time has elapsed. It is also possible that monk parakeets modify their call based on the context, audience or emotional state. For example, some variants might be used in a foraging context where a partner is present while others are given in isolation. A third possibility is that monk parakeets actively modify their contact call to match other individuals in their group, similar to the rapid convergence found in orange-fronted conures [34]. If this is the case, we would expect a sequence of calls to vary depending on whom an individual is directing their call towards and the size of the audience. This would also suggest that individuals in larger groups should exhibit more variable calls. Both of these scenarios remain to be studied in more depth. However, the presence of voice-prints may help explain how individuals can have such variable calls. If individuals modify the tonal structure of their contact calls in call-response interactions, the individual signature in those calls will degrade over seconds within a recording. The voice-print would, however, be much more stable, given that it is generated by the morphology of the vocal apparatus, and it would still provide conspecifics with reliable features for recognizing the vocalizing individual.

The fact that individuals are so variable in their calls raises a methodological problem for dialect studies on unmarked populations. When recording in the wild, individuals can generally only be monitored for short periods of time. For example, in our study it was rarely possible to record individuals for more than 3–5 min. Within such a short period individuals are likely to exhibit a consistent individual signature, but we found this signature to be less consistent across recordings. A common technique to exclude repeated sampling of individuals across recordings is to look for highly similar calls and exclude these [30,41]. However, this assumes one can reliably estimate how similar a call needs to be in order to classify it as coming from the same individual. We show that this cannot be reliably estimated from short-term recordings. Moreover, we show that determining which calls come from the same individual in a large sample is not realistic, given the amount of within-individual variability in contact calls. Instead, we suggest estimating the probability of recording the same individual multiple times and using a sensitivity analysis to test if the detected dialect signal is likely to be a true signal, or if it could have been caused by pseudoreplication (e.g. [43]).

5. Conclusion and outlook

Despite decades of research, the ability of parrots to identify each other based on vocalizations is still not well understood. Some species have clear group signatures and dialects [29], while others appear to have a much more pronounced individual signature in their contact calls [30,36,37,41]. This study provides the first evidence for an individual voice-print across multiple call types in parrots. Additionally, it demonstrates considerable variability in the contact call across recordings, while the weak individual signature remains stable across years. Finally, our findings suggest that the contact call is not unique in its ability to broadcast caller identity in parrots. Instead, it appears that parrots may have evolved the capacity for individual recognition across multiple call types [71]. While our study provides evidence for detectable voice-prints in monk parakeets, further investigation is needed to establish whether parrots actively use voice-prints to recognize conspecifics. More generally, it would now be exciting to test if voice-prints are present in other species as well, and, if these voice-prints are used for recognition, to further explore the dynamics driving the evolution of voice-prints, including whether their presence is predicted by lifelong vocal learning or complex social interactions.

Acknowledgements

We would like to thank Zoo Barcelona and Josep M. Alonso Farré for granting us access to the zoo grounds and showing us around. We would like to thank Andrés Manzanilla, Gustavo Alarcón-Nieto, Mireia Fuertes Clavero, Alba Ortega-Segalerva and José G. Carrillo-Ortiz for all their effort during fieldwork. Finally, we would like to thank Francesca S. E. Dawson Pell, Daniel Redhead, Jack W. Bradbury, Susannah Buhrman-Deever, Grace Smith-Vidaurre, Timothy F. Wright, Roger Mundry, Mirjam Knörnschild and Elizabeth Hobson for valuable advice during the early stages of this project.

Ethics

All monk parakeets were ringed and blood samples taken with special permission EPI 7/2015 (01529/1498/2015) from Direcció General del Medi Natural i Biodiversitat, Generalitat de Catalunya, following Catalan regional ethical guidelines for the handling of birds. J.C.S. received authorization (001501-0402.2009) for animal handling for research purposes from Servei de Protecció de la Fauna, Flora i Animal de Companyia, according to Decree 214/1997/30.07.

Data accessibility

Code and small data files are publicly available on GitHub: https://github.com/simeonqs/Evidence_for_vocal_signatures_and_voice-prints_in_a_wild_parrot. The full repository including large data files is publicly available on Edmond [72].

The supplemental methods and results are provided in the electronic supplementary material [73].

Declaration of AI use

We have not used AI-assisted technologies in creating this article.

Authors' contributions

S.Q.S.: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, software, validation, visualization, writing—original draft, writing—review and editing; J.C.S.: data curation, funding acquisition, methodology, project administration, resources, writing—review and editing; L.M.A.: conceptualization, funding acquisition, resources, supervision, writing—review and editing; M.B.M.: conceptualization, funding acquisition, resources, supervision, writing—review and editing.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

We declare we have no competing interests.

Funding

Open access funding provided by the Max Planck Society.

S.Q.S. received funding from the International Max Planck Research School for Quantitative Behaviour, Ecology and Evolution. L.M.A. was funded by a Max Planck Research Group Leader Fellowship, and is currently supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00056. Research funding was provided to S.Q.S. and L.M.A. by the Centre for the Advanced Study of Collective Behaviour (CASCB), funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy (EXC 2117-422037984). J.C.S. was supported by a research project from the Ministry of Science and Innovation (CGL-2020 PID2020-114907GB-C21).

References

1. Russell A, Hatchwell B. 2001. Experimental evidence for kin-biased helping in a cooperatively breeding vertebrate. Proc. R. Soc. Lond. B 268, 2169-2174. (doi:10.1098/rspb.2001.1790)
2. Boesch C. 1994. Cooperative hunting in wild chimpanzees. Anim. Behav. 48, 653-667. (doi:10.1006/anbe.1994.1285)
3. Hobson E, Mønster D, DeDeo S. 2021. Aggression heuristics underlie animal dominance hierarchies and provide evidence of group-level social information. Proc. Natl Acad. Sci. USA 118, e2022912118. (doi:10.1073/pnas.2022912118)
4. Johnstone R. 1997. Recognition and the evolution of distinctive signatures: when does it pay to reveal identity? Proc. R. Soc. Lond. B 264, 1547-1553. (doi:10.1098/rspb.1997.0215)
5. Tibbetts E, Dale J. 2007. Individual recognition: it is good to be different. Trends Ecol. Evol. 22, 529-537. (doi:10.1016/j.tree.2007.09.001)
6. Carlson N, Kelly E, Couzin I. 2020. Individual vocal recognition across taxa: a review of the literature and a look into the future. Phil. Trans. R. Soc. B 375, 20190479. (doi:10.1098/rstb.2019.0479)
7. Kummer H. 2017. Primate societies: group techniques of ecological adaptation. New York, NY: Routledge.
8. Aureli F, Schaffner C, Schino G. 2022. Variation in communicative complexity in relation to social structure and organization in non-human primates. Phil. Trans. R. Soc. B 377, 20210306. (doi:10.1098/rstb.2021.0306)
9. Migliano A, Vinicius L. 2022. The origins of human cumulative culture: from the foraging niche to collective intelligence. Phil. Trans. R. Soc. B 377, 20200317. (doi:10.1098/rstb.2020.0317)
10. Sheehan M, Nachman M. 2014. Morphological and population genomic evidence that human faces have evolved to signal individual identity. Nat. Commun. 5, 1-10. (doi:10.1038/ncomms5800)
11. Tibbetts E. 2004. Complex social behaviour can select for variability in visual features: a case study in Polistes wasps. Proc. R. Soc. Lond. B 271, 1955-1960. (doi:10.1098/rspb.2004.2784)
12. Mundinger P. 1970. Vocal imitation and individual recognition of finch calls. Science 168, 480-482. (doi:10.1126/science.168.3930.480)
13. Janik V, Sayigh L. 2013. Communication in bottlenose dolphins: 50 years of signature whistle research. J. Comp. Physiol. A 199, 479-489. (doi:10.1007/s00359-013-0817-7)
14. Janik V, Sayigh L, Wells R. 2006. Signature whistle shape conveys identity information to bottlenose dolphins. Proc. Natl Acad. Sci. USA 103, 8293-8297. (doi:10.1073/pnas.0509918103)
15. Elie J, Theunissen F. 2018. Zebra finches identify individuals using vocal signatures unique to each call type. Nat. Commun. 9, 1-11. (doi:10.1038/s41467-018-06394-9)
16. Charrier I, Jouventin P, Mathevon N, Aubin T. 2001. Individual identity coding depends on call type in the South Polar skua Catharacta maccormicki. Polar Biol. 24, 378-382. (doi:10.1007/s003000100231)
17. Mäkelin S, Wahlberg M, Osiecka A, Hermans C, Balsby T. 2021. Vocal behaviour of the great cormorant Phalacrocorax carbo sinensis during the breeding season. Bird Study 68, 1-9.
18. Prat Y, Taub M, Yovel Y. 2016. Everyday bat vocalizations contain information about emitter, addressee, context, and behavior. Sci. Rep. 6, 1-10. (doi:10.1038/srep39419)
19. Keenan S, Mathevon N, Stevens J, Nicolè F, Zuberbühler K, Guéry J, Levréro F. 2020. The reliability of individual vocal signature varies across the bonobo’s graded repertoire. Anim. Behav. 169, 9-21. (doi:10.1016/j.anbehav.2020.08.024)
20. Bouchet H, Blois-Heulin C, Pellier A, Zuberbühler K, Lemasson A. 2012. Acoustic variability and individual distinctiveness in the vocal repertoire of red-capped mangabeys (Cercocebus torquatus). J. Comp. Psychol. 126, 45. (doi:10.1037/a0025018)
21. Salmi R, Hammerschmidt K, Doran-Sheehy D. 2014. Individual distinctiveness in call types of wild western female gorillas. PLoS ONE 9, e101940. (doi:10.1371/journal.pone.0101940)
22. Rauber R, Kranstauber B, Manser M. 2020. Call order within vocal sequences of meerkats contains temporary contextual and individual information. BMC Biol. 18, 1-11. (doi:10.1186/s12915-020-00847-8)
23. Mathias S, von Kriegstein K. 2014. How do we recognise who is speaking. Front. Biosci. 6, 92-109. (doi:10.2741/S417)
24. Reby D, André-Obrecht R, Galinier A, Farinas J, Cargnelutti B. 2006. Cepstral coefficients and hidden Markov models reveal idiosyncratic voice characteristics in red deer (Cervus elaphus) stags. J. Acoust. Soc. Am. 120, 4080-4089. (doi:10.1121/1.2358006)
25. Bradbury J, Balsby T. 2016. The functions of vocal learning in parrots. Behav. Ecol. Sociobiol. 70, 293-312. (doi:10.1007/s00265-016-2068-4)
26. Wright T, Dahlin C. 2018. Vocal dialects in parrots: patterns and processes of cultural evolution. Emu 118, 50-66. (doi:10.1080/01584197.2017.1379356)
27. Berg K, Delgado S, Cortopassi K, Beissinger S, Bradbury J. 2012. Vertical transmission of learned signatures in a wild parrot. Proc. R. Soc. B 279, 585-591. (doi:10.1098/rspb.2011.0932)
28. Teixeira D, Hill R, Barth M, Maron M, van Rensburg B. 2021. Vocal signals of ontogeny and fledging in nestling black-cockatoos: implications for monitoring. Bioacoustics 31, 1-18.
29. Wright T. 1996. Regional dialects in the contact call of a parrot. Proc. R. Soc. Lond. B 263, 867-872. (doi:10.1098/rspb.1996.0128)
30. Smith-Vidaurre G, Araya-Salas M, Wright T. 2020. Individual signatures outweigh social group identity in contact calls of a communally nesting parrot. Behav. Ecol. 31, 448-458. (doi:10.1093/beheco/arz202)
31. Dahlin C, Young A, Cordier B, Mundry R, Wright T. 2014. A test of multiple hypotheses for the function of call sharing in female budgerigars, Melopsittacus undulatus. Behav. Ecol. Sociobiol. 68, 145-161. (doi:10.1007/s00265-013-1631-5)
32. Scarl J, Bradbury J. 2009. Rapid vocal convergence in an Australian cockatoo, the galah Eolophus roseicapillus. Anim. Behav. 77, 1019-1026. (doi:10.1016/j.anbehav.2008.11.024)
33. Wright T, Hara E, Young A, Araya Salas M, Dahlin C, Whitney O, Lucero E, Smith Vidaurre G. 2015. Extreme vocal plasticity in adult budgerigars: analytical challenges, social significance, and underlying neurogenetic mechanisms. J. Acoust. Soc. Am. 138, 1880. (doi:10.1121/1.4933899)
34. Balsby T, Bradbury J. 2009. Vocal matching by orange-fronted conures (Aratinga canicularis). Behav. Processes 82, 133-139. (doi:10.1016/j.beproc.2009.05.005)
35. Balsby T, Momberg J, Dabelsteen T. 2012. Vocal imitation in parrots allows addressing of specific individuals in a dynamic communication network. PLoS ONE 7, e49747. (doi:10.1371/journal.pone.0049747)
36. Thomsen H, Balsby T, Dabelsteen T. 2013. Individual variation in the contact calls of the monomorphic peach-fronted conure, Aratinga aurea, and its potential role in communication. Bioacoustics 22, 215-227. (doi:10.1080/09524622.2013.779560)
37. Berg K, Delgado S, Okawa R, Beissinger S, Bradbury J. 2011. Contact calls are used for individual mate recognition in free-ranging green-rumped parrotlets, Forpus passerinus. Anim. Behav. 81, 241-248. (doi:10.1016/j.anbehav.2010.10.012)
38. Wright T, Dahlin C, Salinas-Melgoza A. 2008. Stability and change in vocal dialects of the yellow-naped amazon. Anim. Behav. 76, 1017-1027. (doi:10.1016/j.anbehav.2008.03.025)
39. Wein A, Schwing R, Yanagida T, Huber L. 2021. Vocal development in nestling kea parrots (Nestor notabilis). Bioacoustics 30, 142-162. (doi:10.1080/09524622.2019.1705184)
40. Martella M, Bucher E. 1990. Vocalizations of the monk parakeet. Bird Behav. 8, 101-110. (doi:10.3727/015613890791784290)
41. Buhrman-Deever S, Rappaport A, Bradbury J. 2007. Geographic variation in contact calls of feral North American populations of the monk parakeet. Condor 109, 389-398. (doi:10.1093/condor/109.2.389)
42. Smith-Vidaurre G, Perez-Marrufo V, Wright T. 2021. Individual vocal signatures show reduced complexity following invasion. Anim. Behav. 179, 15-39. (doi:10.1016/j.anbehav.2021.06.020)
43. Smeele S, Tyndel S, Aplin L, McElreath M. 2022. Multi-level Bayesian analysis of monk parakeet contact calls shows dialects between European cities. bioRxiv. (doi:10.1101/2022.10.12.511863)
44. Batllori X, Nos R. 1985. Presencia de la cotorrita gris (Myiopsitta monachus) y de la cotorrita de collar (Psittacula krameri) en el área metropolitana de Barcelona. Miscel·lània Zoològica 9, 407-411.
45. Eberhard J. 1998. Breeding biology of the monk parakeet. Wilson Bull. 110, 463-473.
46. Senar J, Carrillo-Ortiz J, Arroyo L. 2012. Numbered neck collars for long-distance identification of parakeets. J. Field Ornithol. 83, 180-185.
47. Cornell Lab of Ornithology. 2016. Raven Lite: interactive sound analysis software. Ithaca, NY: Cornell Lab of Ornithology.
48. Giorgino T. 2009. Computing and visualizing dynamic time warping alignments in R: the dtw package. J. Stat. Softw. 31, 1-24. (doi:10.18637/jss.v031.i07)
49. Clark C, Marler P, Beeman K. 1987. Quantitative analysis of animal vocal phonology: an application to swamp sparrow song. Ethology 76, 101-115. (doi:10.1111/j.1439-0310.1987.tb00676.x)
50. R Core Team. 2021. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
51. Kenny D, La Voie L. 1984. The social relations model. Adv. Exp. Soc. Psychol. 18, 141-182. (doi:10.1016/S0065-2601(08)60144-6)
52. Mundry R, Sommer C. 2007. Discriminant function analysis with nonindependent data: consequences and an alternative. Anim. Behav. 74, 965-976. (doi:10.1016/j.anbehav.2006.12.028)
53. Gabry J, Češnovar R. 2022. cmdstanr: R interface to 'CmdStan'. https://mc-stan.org/cmdstanr/, https://discourse.mc-stan.org.
54. Gelman A, Lee D, Guo J. 2015. Stan: a probabilistic programming language for Bayesian inference and optimization. J. Educ. Behav. Stat. 40, 530-543. (doi:10.3102/1076998615606113)
55. Balsby T, Adams D. 2011. Vocal similarity and familiarity determine response to potential flockmates in orange-fronted conures (Psittacidae). Anim. Behav. 81, 983-991. (doi:10.1016/j.anbehav.2011.01.034)
56. Farabaugh S, Linzenbold A, Dooling R. 1994. Vocal plasticity in budgerigars (Melopsittacus undulatus): evidence for social factors in the learning of contact calls. J. Comp. Psychol. 108, 81. (doi:10.1037/0735-7036.108.1.81)
57. Blumstein D, Munos O. 2005. Individual, age and sex-specific information is contained in yellow-bellied marmot alarm calls. Anim. Behav. 69, 353-361. (doi:10.1016/j.anbehav.2004.10.001)
58. Mitani J, Gros-Louis J, Macedonia J. 1996. Selection for acoustic individuality within the vocal repertoire of wild chimpanzees. Int. J. Primatol. 17, 569-583. (doi:10.1007/BF02735192)
59. Leroux M, Al-Khudhairy OG, Perony N, Townsend SW. 2021. Chimpanzee voice prints? Insights from transfer learning experiments from human voices. arXiv. (http://arxiv.org/abs/2112.08165)
60. Chakraborty K, Talele A, Upadhya S. 2014. Voice recognition using MFCC algorithm. Int. J. Innov. Res. Adv. Eng. 1, 158-161.
61. Wright T, Cortopassi K, Bradbury J, Dooling R. 2003. Hearing and vocalizations in the orange-fronted conure (Aratinga canicularis). J. Comp. Psychol. 117, 87. (doi:10.1037/0735-7036.117.1.87)
62. Charrier I, Mathevon N, Jouventin P. 2002. How does a fur seal mother recognize the voice of her pup? An experimental study of Arctocephalus tropicalis. J. Exp. Biol. 205, 603-612. (doi:10.1242/jeb.205.5.603)
63. Nottebohm F. 1976. Phonation in the orange-winged Amazon parrot, Amazona amazonica. J. Comp. Physiol. 108, 157-170. (doi:10.1007/BF02169046)
64. Ohms V, Beckers G, Suthers R. 2012. Vocal tract articulation revisited: the case of the monk parakeet. J. Exp. Biol. 215, 85-92. (doi:10.1242/jeb.064717)
65. Larsen O, Goller F. 2002. Direct observation of syringeal muscle function in songbirds and a parrot. J. Exp. Biol. 205, 25-35. (doi:10.1242/jeb.205.1.25)
66. Beckers G, Nelson B, Suthers R. 2004. Vocal-tract filtering by lingual articulation in a parrot. Curr. Biol. 14, 1592-1597. (doi:10.1016/j.cub.2004.08.057)
67. Patterson D, Pepperberg I. 1994. A comparative study of human and parrot phonation: acoustic and articulatory correlates of vowels. J. Acoust. Soc. Am. 96, 634-648. (doi:10.1121/1.410303)
68. Warren D, Patterson D, Pepperberg I. 1996. Mechanisms of American English vowel production in a grey parrot (Psittacus erithacus). Auk 113, 41-58. (doi:10.2307/4088934)
69. Bottoni L, Masin S, Lenti-Boero D. 2009. Vowel-like sound structure in an African grey parrot (Psittacus erithacus) vocal production. Open Behav. Sci. J. 3, 1-16. (doi:10.2174/1874230000903010001)
70. Brittan-Powell E, Dooling R, Larsen O, Heaton J. 1997. Mechanisms of vocal production in budgerigars (Melopsittacus undulatus). J. Acoust. Soc. Am. 101, 578-589. (doi:10.1121/1.418121)
71. Lavan N, Burton A, Scott S, McGettigan C. 2019. Flexible voices: identity perception from variable vocal signals. Psychon. Bull. Rev. 26, 90-102. (doi:10.3758/s13423-018-1497-7)
72. Smeele SQ. 2023. Data from: Evidence for vocal signatures and voice-prints in a wild parrot. Edmond. (doi:10.17617/3.RUIM5I)
73. Smeele SQ, Carlos Senar J, Aplin LM, Brooke McElreath M. 2023. Evidence for vocal signatures and voice-prints in a wild parrot. Figshare. (doi:10.6084/m9.figshare.c.6837576)
