Abstract
Vocal individuality is essential for social discrimination but has been poorly studied in animals that produce communal signals (duets or choruses). Song overlapping and temporal coordination make the assessment of individuality in communal signals more complex. In addition, selection may favor the accurate identification of pairs over individuals by receivers in year-round territorial species with duetting and long-term pair bonding. Here, we studied pair and individual vocal signatures in the polyphonal duets of rufous horneros Furnarius rufus, a Neotropical bird known for its long-term pair bonds. Hornero partners engage in duets to deter territorial intruders and protect their partnership year-round and can discern duets from neighbors versus strangers. Using a dataset of 471 duets from 43 pairs in 2 populations, we measured fine-scale acoustic features across different duet levels (e.g., complete duets to non-overlapping syllable parts) and analysis levels (pair or individual). Permuted linear discriminant function analyses classified pairs and individuals more accurately than expected by chance (means: 45% and 47% vs. 4% and 2%, respectively). Pair identity explained more variance in the multivariate acoustic features of duets than individual or population identities. The initial frequency of the duet showed strong potential for encoding pair identity. The acoustic traits contributing most to individual vocal signatures varied between sexes, which might facilitate the simultaneous assessment of duetters’ identities by receivers. Our study indicates that vocal individuality may exist even in species with intricate and innate communal signals and elucidates the mechanisms employed by horneros in their social discrimination ability.
Keywords: coordination, duetting, social discrimination, suboscine, vocal individuality, vocal signature
In animals relying on acoustic communication, selection often favors the evolution of sex coding (Volodin et al. 2015; Odom et al. 2021) and individual coding (Lambrechts and Dhondt 1995; McGregor et al. 1997; Brecht and Nieder 2020). There are often clear fitness advantages for conspecifics in identifying mates (Bee 2008), parents (Charrier et al. 2003), offspring (Sibiryakova et al. 2015), relatives (Akçay et al. 2013), group members (Radford 2005; Blackburn et al. 2023), and rivals (Lambrechts and Dhondt 1995). Acoustic coding of individual identity is a prevalent pattern in birds and mammals, as shown by studies evaluating senders’ acoustic signatures and receivers’ ability to discriminate among individuals (Briefer et al. 2008; Boeckle and Bugnyar 2012; Xia et al. 2012; Brecht and Nieder 2020; Carlson et al. 2020). Conversely, vocal distinctiveness may also evolve as a byproduct of the ontogeny of vocal tracts and vocalizations (Suthers 1994; McGregor et al. 1997; Clink et al. 2020). In birds, individual distinctiveness in acoustic signals is common, spanning from the simple innate calls of phylogenetically basal species (Guggenberger et al. 2022) to the complex, learned songs of songbirds with rich repertoires (Lehongre et al. 2008; Průchová et al. 2017; Chen et al. 2020).
Although considerable research has focused on vocal individuality in birds and mammals, these studies center on species producing individual “solo” vocalizations (Xia et al. 2012; Gémard et al. 2019; Osiejuk et al. 2019; Carlson et al. 2020; Smith-Vidaurre et al. 2021). Increasing attention has been directed to the investigation of how communal signals or signals arising from vocal interactions convey information about the sender’s identity (Budde 2001; Baker 2004; Klenova et al. 2009a, 2020; Bragina and Beme 2010; van den Heuvel et al. 2013; Feng et al. 2014; Villain et al. 2017; Clink et al. 2020; Lau et al. 2022). Communal signals involve coordinated acoustic signaling between 2 individuals (duets) or more (choruses), typically pair or group members (Hall 2004, 2009; Tobias et al. 2016). Although these signals serve various functions, empirical evidence suggests their main function is the joint defense of resources or territories (Langmore 1998; Hall 2004, 2009; Dahlin and Benedict 2014; Tobias et al. 2016).
The few studies on communal signaling species have found that individuals can discriminate between neighbors and strangers based on signal features (Wiley and Wiley 1977; Bradley et al. 2013; Christensen and Radford 2018; Amorim et al. 2022; Spezie et al. 2023), with a few exceptions (Milani 1985; Battiston et al. 2015). However, it often remains unclear whether communal signals convey information about individual or pair identities and whether receivers use individual or pair features for discrimination. An exception is found in a study on the kōkako (Callaeas wilsoni), showing the birds’ ability to discriminate between familiar and unfamiliar pairs based on either sex’s contribution to a duet (Bradley et al. 2013). Such discrimination is challenging because song overlapping within duets and choruses may complicate the assessment of individual identities. Instead, communal signals could provide more reliable cues for identifying pairs. In addition, the temporal structure of duets (de Reus et al. 2021) may result from both individual and pair-level attributes (de Reus et al. 2021; Lau et al. 2022). Consequently, selection may favor the ability to recognize pairs or groups over individuals, especially in species that produce duets or choruses for deterring territorial intruders. This pattern appears to be the case in crimson-breasted shrikes (Laniarius atrococcineus), in which duets are more reliable indicators of pair identity than solos are of individual identity (van den Heuvel et al. 2013). In northern gray gibbons (Hylobates funereus), the vocal response of males to female calls within duets acts as a more consistent indicator of pair identity than of individual male identity (Lau et al. 2022). Nevertheless, individual contributions to duets may still confer reliable and stable information about the identity of duetting individuals and pairs, as demonstrated in crane species (Balearica regulorum, Grus japonensis, Leucogeranus leucogeranus) (Budde 2001; Klenova et al. 2009a, 2020) and northern gray gibbons (Lau et al. 2022).
The rufous hornero (Furnarius rufus) is a Neotropical, socially monogamous suboscine bird known for producing polyphonal duets, characterized by a high degree of overlapping between sex-specific songs (Roper 2005; Diniz et al. 2018). The rufous hornero is also known for long-term pair bonding (Amorim et al. 2023b) and low extrapair paternity (Diniz et al. 2019). Previous studies suggest that hornero partners engage in year-round duetting to collectively defend a shared territory and partnership, although individuals also produce solo songs (Diniz et al. 2018, 2020). Duetting is assumed to be innate in this tracheophone suboscine (Ten Cate 2021; Amador and Mindlin 2023), yet these birds are able to produce duets with complex temporal structure (Amador et al. 2005), to discriminate between duets of neighbors and strangers regardless of context (Amorim et al. 2022, 2023a), and to discern variations in duet coordination patterns (Diniz et al. 2021). Therefore, this species appears to be an adequate model for evaluating acoustic individuality in a communal signaling context.
We investigated whether duets of the hornero contain acoustic information that would allow classifying populations, pairs, sexes, and individuals. We also examined the contribution of each class (e.g., population, pair) to the multivariate variance in individual-level acoustic traits of duets. To achieve these aims, we analyzed fine-scale acoustic traits across levels of duet (e.g., complete duets to non-overlapping syllable parts) and analysis (pair or individual) using a dataset of duets recorded in 2 geographically distant populations (~820 km apart) (Diniz et al. 2018; Amorim et al. 2022). We initially predicted that acoustic traits of duets would accurately classify and explain variance in descending order: population, sex, pair, and individual. This prediction was based on the assumption that innate suboscine songs may vary geographically, driven by isolation by distance, geographic barriers, hybridization, morphology, and/or the environment (Capelli et al. 2020; Acero-Murcia et al. 2021; Maldonado-Coelho et al. 2023). The second prediction was that duet contributions would vary between sexes, consistent with prior studies in this species (Laje and Mindlin 2003; Roper 2005). Finally, we expected better classification of pairs in comparison to individuals, given the high degree of overlap between partner songs in duets (Diniz et al. 2021), potentially complicating the decoding of individual signatures. The prevalence and relevance of duets, compared with solos, in hornero territory defense (Diniz et al. 2018, 2020, 2021) and breeding success (Diniz et al. 2019) further support this prediction.
Materials and Methods
Duet recordings
We analyzed a dataset of duets recorded in 2 populations of rufous horneros located approximately 820 km apart. Both populations are within urban university campuses, one in Brasília, central Brazil (15°45ʹS, 47°51ʹW, elevation: 1,032 m), and the other in Juiz de Fora, southeastern Brazil (21°46ʹS, 43°22ʹW, elevation: 847 m). The duets were recorded using Marantz PMD660 recorders and Sennheiser ME66/K6 unidirectional microphones, at a distance of about 10 m from the birds (sampling rate: 44.1 or 48 kHz; resolution: 16 or 24 bits) (Diniz et al. 2018, 2020; Amorim et al. 2022). Duets were recorded in Brasília from 2013 to 2015 and in Juiz de Fora in 2019 (Diniz et al. 2019; Amorim et al. 2022). The populations were partially banded (see below), and we assumed that the identities of unbanded individuals did not change, whether they were recorded once or multiple times in the same year and within the same territory. In the Brasília population, our estimates indicate that partners remain together for over 1 year in nearly 60% of the pairs (Amorim et al. 2023b). In addition, partner replacement is often followed by a high production of duets and an increase in aggressive interactions with neighbors and strangers (P.D., personal observation).
To ensure data quality, we applied 2 inclusion criteria for duet recordings. First, we selected recordings with a minimum of 5 duets from the same pair to ensure sufficient statistical power for the discriminant function analysis (Williams and Titus 1988). Second, to maintain data independence, we included only duets that could be attributed to a single pair per territory (Colegrave and Ruxton 2018). Thus, we excluded duets recorded at the same territory that involved different pairs or different individuals of the same sex, which occurred in cases of partner or pair replacement, an adult duetting with a juvenile, or when bird identities could not be determined.
Our final analysis comprised 471 duets from 43 pairs (mean ± SD = 11 ± 7 duets per pair, range: 5–35), with 394 duets from 31 pairs recorded in Brasília and 77 from 12 pairs in Juiz de Fora. Among the duets, 356 had both pair members banded (n = 31 pairs), and 115 had only one banded individual (n = 12 pairs). The duets included spontaneous productions as well as duets elicited by conspecific playbacks in experimental studies. These playback experiments consisted of recording vocal responses to a single broadcast duet per treatment (Diniz et al. 2020; Amorim et al. 2022). The recordings were made during both breeding and non-breeding seasons. Previous studies have shown that conspecific song playbacks generally do not affect the duration of replying duets (Diniz et al. 2020; Amorim et al. 2022), but duets are shorter during the breeding season and influenced by the time of day (Diniz et al. 2018). Some pairs were recorded in different years (n = 11), whereas the majority were recorded within a year (n = 32). However, we could not evaluate whether duet signatures remain consistent across years (Klenova et al. 2020; Calcari et al. 2021; Chelysheva et al. 2023), given the small number of duets recorded per pair (see above). Therefore, we tested whether the acoustic structure of duets could be used for classifying populations, pairs, sexes, and individuals, although we acknowledge the potential confounding and uncontrolled effects of context (aggressive vs. non-aggressive), season, time of day, and pair bond duration.
Acoustic analyses
We easily differentiated the fast-paced syllables of males from the slow-paced syllables of females (Roper 2005) by visualizing oscillograms (amplitude envelopes) and spectrograms (window: Hann, window length: 512, overlap: 50) in Raven Pro 1.6.4 (K. Lisa Yang Center for Conservation Bioacoustics 2022). Recordings were standardized in sampling rate (44.1 kHz) and resolution (16 bits) with Adobe Audition 2015.0. We selected the onset and offset of each syllable produced by each sex in each duet in Raven Pro 1.6.4. Acoustic traits from duets (pair level) and individual contributions (individual level) were extracted using the warbleR package version 1.1.28 (Araya-Salas and Smith-Vidaurre 2017) in R 4.1.1. Measurements are detailed in the following subsection, categorized into spectro-temporal characteristics and temporal structure within duets (gaps and overlap between syllables) (Table 1).
Table 1.
Summary of the spectro-temporal and temporal structure measurements extracted from various hierarchical levels of duets (Figure 1)
| Level of analysis | Duet level | Spectro-temporal measurements (Sp) | Temporal structure measurements (Ts) |
|---|---|---|---|
| Pair | Complete duet (CoD) | Yes | Yes |
| Pair | Overlapping duet section (OvS) | Yes | Partially |
| Pair | Overlapping syllable parts (OvP) | Yes | No |
| Individual | Female duet phrase | No | Yes |
| Individual | Male duet phrase | No | Yes |
| Individual | Non-overlapping female syllable parts | Yes | No |
| Individual | Non-overlapping male syllable parts | Yes | No |
Pair-level analyses
At the pair level of analysis, we assessed spectro-temporal measurements for 3 sublevels: complete duets, overlapping duet sections, and overlapping syllable parts (Figure 1, Table 1). The complete duet spanned the onset of the first syllable to the offset of the last syllable, regardless of sex. The overlapping duet section covered the time interval between (1) the onset of the last syllable that the duet-initiating bird produced before the first syllable of the second individual, and (2) the offset of the first syllable produced by one bird just after its partner finished its contribution to the duet (song_analysis function; Araya-Salas and Smith-Vidaurre 2017). Thus, the overlapping duet section excludes most of the introductory and ending syllables, keeping only the last introductory syllable and first ending syllable. The overlapping syllable parts refer to each of the multiple intervals within a duet where partners overlap their syllables, excluding non-overlapping parts.
Figure 1.
Hierarchical structure of rufous hornero’s duets. At the pair level of analysis, 3 levels of duet were considered: complete duets, overlapping duet sections, and overlapping syllable parts. At the individual level of analysis, 2 levels of duet were considered: duet phrases and non-overlapping syllable parts. The spectrogram was retrieved and modified from Diniz et al. (2021)
We obtained 26 spectro-temporal measurements for the 3 sublevels (i.e., complete duets, overlapping duet sections, and overlapping syllable parts) using the spectro_analysis function with the following settings: frequency range: 0.5–22.05 kHz, overlap: 90, window length: 216 for the time domain and 2,048 for the frequency domain (Araya-Salas and Smith-Vidaurre 2017). These measurements include temporal aspects, such as the duration of a duet or its overlapping section, median time, time quartiles, and time entropy, and frequency characteristics, such as median frequency, mean frequency, frequency quartiles, and spectral entropy (Table 2). In addition to the spectro-temporal measurements, we computed 179 descriptive statistics on Mel-frequency cepstral coefficients (MFCC) for the 3 sublevels using the mfcc_stats function of the warbleR package (settings: frequency range: 0.5–22.05 kHz, overlap: 90, window length: 512). Descriptive statistics on MFCC include mean, median, minimum, maximum, skewness, kurtosis, and variance, plus mean and variance of first and second derivatives (Lyon and Ordubadi 1982; Araya-Salas and Smith-Vidaurre 2017; Clink et al. 2021). The lower limit of the frequency range (0.5 kHz) was set to reduce the influence of low-frequency background noise. Measurements taken for the overlapping syllable parts were averaged within each duet.
Table 2.
Spectro-temporal measurements taken from complete duets, overlapping duet sections, overlapping syllable parts, and non-overlapping syllable parts
| Acoustic measurements | Code | Description |
|---|---|---|
| Duration (s) | DUR | Signal length |
| Mean frequency (kHz) | AVF | Weighted average of the frequency spectrum based on the amplitudes within a specific frequency range (0.5–22.05 kHz) |
| Standard deviation of frequency (kHz) | STF | Standard deviation of the frequency spectrum, with each frequency component weighted according to its amplitude within a specific frequency range |
| Median frequency (kHz) | MEF | Frequency at which the frequency spectrum is split into 2 equal-energy frequency intervals |
| Frequency Q25% (kHz) | F25 | Frequency where the frequency spectrum is divided into 2 intervals with 25% and 75% of the energy, respectively |
| Frequency Q75% (kHz) | F75 | Frequency where the frequency spectrum is divided into 2 intervals with 75% and 25% of the energy, respectively |
| Interquartile frequency range (kHz) | IFR | Frequency interval between Frequency Q25% and Frequency Q75% |
| Median time (s) | MET | Time point at which the time envelope is split into 2 equal-energy time intervals |
| Time Q25% (s) | T25 | Time where the time envelope is divided into 2 intervals with 25% and 75% of the energy, respectively |
| Time Q75% (s) | T75 | Time where the time envelope is divided into 2 intervals with 75% and 25% of the energy, respectively |
| Interquartile time range (s) | ITR | Time interval between Time Q25% and Time Q75% |
| Skewness | SKW | Asymmetry of the frequency spectrum, indicating whether the spectrum is skewed to the left (S < 0) or right (S > 0) |
| Kurtosis | KUT | Peakedness of the frequency spectrum, indicating how much the spectrum deviates from a normal shape (K = 3) |
| Spectral entropy | SEM | Distribution of energy in the frequency spectrum from pure tone (0) to noisy (1) |
| Time entropy | TEM | Distribution of energy in the time envelope from amplitude concentrated in a specific point (0) or spread throughout the recording (1) |
| Spectrographic entropy | SPE | Product of spectral entropy and time entropy from pure tone (0) to noisy (1) (no unit) |
| Spectral flatness | SPF | Similar to spectral entropy from pure tone (0) to noisy (1) (no unit) |
| Mean dominant frequency (kHz) | AVD | Mean of dominant frequency measured across time points in the spectrogram |
| Minimum dominant frequency (kHz) | MID | Minimum of dominant frequency measured across time points in the spectrogram |
| Maximum dominant frequency (kHz) | MAD | Maximum of dominant frequency measured across time points in the spectrogram |
| Dominant frequency range (kHz) | DFR | Range of dominant frequency measured across time points in the spectrogram |
| Modulation index | MOD | Cumulative (absolute) variations between consecutive measurements of dominant frequency divided by dominant frequency range |
| Start dominant frequency (kHz) | SDF | Dominant frequency at the start of the signal |
| End dominant frequency (kHz) | EDF | Dominant frequency at the end of the signal |
| Dominant frequency slope (kHz/s) | DFS | Change in dominant frequency over time, calculated as (End dominant frequency − Start dominant frequency) / Duration |
| Mean peak frequency (kHz) | MPF | Highest-energy frequency within the mean frequency spectrum of a time wave, where the mean frequency spectrum represents the mean relative amplitude of the frequency distribution |
Spectro-temporal measurements were extracted from acoustic signals using the spectro_analysis function from warbleR package in R (Araya-Salas and Smith-Vidaurre 2017).
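As a rough sketch of this extraction step, the R code below assumes a warbleR selection table (read here from a hypothetical `duet_selections.csv`) with one row per complete duet, overlapping duet section, or overlapping syllable part; the function names are from warbleR, and the argument values simply mirror the settings reported above.

```r
# Sketch of the pair-level spectro-temporal and MFCC extraction
# (warbleR; Araya-Salas and Smith-Vidaurre 2017).
library(warbleR)

# Hypothetical selection table: columns sound.files, selec, start, end
sels <- read.csv("duet_selections.csv")

# 26 spectro-temporal measurements (0.5-22.05 kHz, 90% overlap,
# window lengths for the time and frequency domains as in the text)
sp <- spectro_analysis(sels, bp = c(0.5, 22.05), ovlp = 90,
                       wl = 216, wl.freq = 2048)

# Descriptive statistics on Mel-frequency cepstral coefficients
mf <- mfcc_stats(sels, bp = c(0.5, 22.05), ovlp = 90, wl = 512)

# Combine by selection identifiers for downstream analyses
acoustic_traits <- merge(sp, mf, by = c("sound.files", "selec"))
```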
At the pair level of analysis, we considered 8 temporal structure measurements for complete duets and 3 of these measurements for overlapping duet sections (Table 1, Figure 1). Measurements taken from complete duets include the absolute time interval between the onsets of female song and male song, duet sound density (proportion of the duet duration composed of syllables, excluding gaps), song overlap degree (proportion of duet duration with overlapping female and male songs), syllable rate (total number of syllables produced by both sexes divided by the duet duration), and coordination score (Table 3). In overlapping duet sections, we measured sound density, syllable rate, and coordination score. The coordination score was obtained through a Monte Carlo randomization test (test_coordination function of warbleR package), comparing the observed proportion of duet duration with overlapping female and male songs to a randomly expected degree of song overlapping based on each singer’s phrase duration. The score was calculated using the equation: (observed overlap − mean random overlap) / mean random overlap, with 1,000 iterations and randomized signals and gaps (Masco et al. 2015; Araya-Salas and Smith-Vidaurre 2017). Positive coordination scores indicate song overlapping, whereas negative scores suggest song alternation.
Table 3.
Temporal structure measurements taken for complete duets at the pair level of analysis
| Temporal structure measurements | Code | Description |
|---|---|---|
| Interval starts (s) | INS | Absolute difference between the starts of female and male songs |
| Interval ends (s) | INE | Absolute difference between the ends of female and male songs |
| Coordination score | COR | (Observed amount of overlap between female and male syllables − mean overlap expected by chance) / mean overlap expected by chance |
| Number of syllables | SYL | Number of syllables in a duet |
| Syllable length (s) | SLE | Mean duration of syllables in a duet |
| Syllable rate | SRT | Number of syllables divided by the duration of complete duet or overlapping duet section |
| Duet sound density | DSD | Proportion of the duration of a complete duet or overlapping duet section composed of syllables, excluding gaps |
| Song overlap degree | SOD | Proportion of duration of complete duet in which female and male songs were overlapped in time |
Temporal structure measurements were extracted from acoustic signals using the functions gaps, song_analysis, and test_coordination from warbleR package in R (Araya-Salas and Smith-Vidaurre 2017). In addition, measurements COR, SRT, and DSD were obtained for overlapping duet sections.
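The coordination score described before Table 3 can be written down compactly. The sketch below is a conceptual re-implementation of the Monte Carlo logic (Masco et al. 2015), not the warbleR::test_coordination function actually used; it assumes a simplified "keep gaps" style randomization that only reshuffles the order of within-phrase gaps.

```r
# 'f' and 'm': two-column matrices of syllable onsets/offsets (s)
# for the female and male within a single duet.

overlap_time <- function(a, b) {
  # total time during which syllables of 'a' and 'b' overlap
  total <- 0
  for (i in seq_len(nrow(a))) for (j in seq_len(nrow(b)))
    total <- total + max(0, min(a[i, 2], b[j, 2]) - max(a[i, 1], b[j, 1]))
  total
}

shuffle_phrase <- function(x) {
  # keep syllable durations, reshuffle the order of the silent gaps
  d <- x[, 2] - x[, 1]
  g <- x[-1, 1] - x[-nrow(x), 2]
  if (length(g) > 1) g <- sample(g)
  onset <- x[1, 1] + cumsum(c(0, d[-length(d)] + g))
  cbind(onset, onset + d)
}

coordination_score <- function(f, m, iterations = 1000) {
  obs <- overlap_time(f, m)
  rand <- replicate(iterations,
                    overlap_time(shuffle_phrase(f), shuffle_phrase(m)))
  (obs - mean(rand)) / mean(rand)  # > 0: overlapping; < 0: alternating
}
```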
Individual-level analyses
At the individual level of analysis, we collected spectro-temporal measurements for non-overlapping syllable parts and temporal structure measurements for duet phrases, representing each individual’s contribution to the duet regardless of syllable type (Table 1, Figure 1). To conduct this analysis, we followed the same procedures used in the pair-level analyses to extract spectro-temporal parameters and compute descriptive statistics on MFCC. Because horneros are polyphonal duetters, individual songs could not be extracted (Diniz et al. 2021). Instead, we focused on the non-overlapping syllable parts. If 2 syllables overlapped partially, we only considered the 2 non-overlapping parts (one for each sex) for measurement. We then averaged measurements taken across the syllables of one individual in a duet, resulting in one data point for each combination of measurement, sex, and duet. The 11 temporal structure measurements at the individual level include phrase duration (duration of female or male song in a duet), sound density (proportion of phrase duration composed of syllables, excluding gaps), mean syllable duration, mean syllable rate, and mean gap duration (interval between consecutive syllables) (Table 4). Note that temporal structure measurements are unaffected by syllable overlap between sexes.
Table 4.
Temporal structure measurements taken for duets at the individual level of analysis
| Temporal pattern measurements | Code | Description |
|---|---|---|
| Phrase length (s) | PLE | Duration of female or male phrase |
| Minimum syllable duration (s) | MIS | Minimum duration of female or male syllables in a duet |
| Mean syllable duration (s) | MES | Mean duration of female or male syllables in a duet |
| Maximum syllable duration (s) | MAS | Maximum duration of female or male syllables in a duet |
| CV syllable duration (%) | CVS | Coefficient of variation for the duration of female or male syllables in a duet |
| Minimum silence gap (s) | MII | Minimum duration of silence gaps between syllables of the same individual in a duet |
| Mean silence gap (s) | MEI | Mean duration of silence gaps between syllables of the same individual in a duet |
| Maximum silence gap (s) | MAI | Maximum duration of silence gaps between syllables of the same individual in a duet |
| CV silence gap (%) | CVI | Coefficient of variation for the duration of silence gaps between syllables of the same individual in a duet |
| Syllable rate | SRI | Number of female or male syllables divided by the duration of the respective phrase in a duet |
| Phrase sound density | PSD | Proportion of the duration of an individual contribution to a duet (i.e., phrase) composed of syllables, excluding gaps |
Temporal structure measurements were extracted from acoustic signals using the functions gaps, song_analysis, and test_coordination from warbleR package in R (Araya-Salas and Smith-Vidaurre 2017).
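The individual-level temporal structure measurements of Table 4 follow directly from syllable onsets and offsets. The base-R sketch below only illustrates the definitions (the study itself used warbleR's gaps and song_analysis functions); `x` is assumed to be a two-column matrix of onsets and offsets for one bird's phrase in one duet, with at least two syllables.

```r
phrase_temporal_traits <- function(x) {
  dur  <- x[, 2] - x[, 1]               # syllable durations
  gaps <- x[-1, 1] - x[-nrow(x), 2]     # silences between consecutive syllables
  ple  <- max(x[, 2]) - min(x[, 1])     # phrase length (PLE)
  c(PLE = ple,
    MIS = min(dur),  MES = mean(dur),  MAS = max(dur),
    CVS = 100 * sd(dur) / mean(dur),    # CV of syllable duration (%)
    MII = min(gaps), MEI = mean(gaps), MAI = max(gaps),
    CVI = 100 * sd(gaps) / mean(gaps),  # CV of silence gaps (%)
    SRI = nrow(x) / ple,                # syllable rate
    PSD = sum(dur) / ple)               # phrase sound density
}
```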
Statistical analyses
General procedures
We utilized 4 statistical approaches in R 4.1.1 (R Core Team 2023): (1) permuted linear discriminant function analyses (pDFA; Mundry and Sommer 2007) and (2) discriminant function analyses (DFA) to classify populations, sexes, pairs, and individuals; (3) permutational multivariate analysis of variance (PERMANOVA) to estimate the variance explained by classes (population, pair, sex, and individual); and (4) the potential of identity coding (PIC) to rank individual acoustic traits by their importance for the classification accuracies obtained with the multivariate approaches listed above.
Before conducting each pDFA, PERMANOVA, and DFA, we performed a pre-processing step on the input acoustic data using the preProcess function from the caret 6.0-93 R package (Kuhn 2008). This pre-processing consisted of data centering and scaling, variable transformation using the Yeo-Johnson transformation, and data dimensionality reduction through principal component analysis to eliminate collinearity. The resulting principal component scores (PC scores), which collectively explained 95% of the variance in the original data, were then used as explanatory variables in the pDFAs. The number of PC scores varied from 3 to 103 across analyses, based on the number of input measurements and their correlation (Supplementary Table S1).
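A minimal sketch of this pre-processing step with caret (Kuhn 2008); `acoustic_traits` stands for any of the numeric input matrices described above, and the 95% variance threshold matches the text.

```r
library(caret)

# Center, scale, Yeo-Johnson transform, and reduce to principal components
# that jointly explain 95% of the variance in the original measurements.
pp <- preProcess(acoustic_traits,
                 method = c("center", "scale", "YeoJohnson", "pca"),
                 thresh = 0.95)
pc_scores <- predict(pp, acoustic_traits)  # PC scores used as predictors
```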
Permuted linear discriminant function analyses
The pDFAs assess whether duet traits could predict the identity of populations, sexes, pairs, and individuals. In bioacoustics, both pDFA and classical discriminant function analysis (DFA) are employed to evaluate the effectiveness of acoustic traits in discriminating classes, often involving individuals or sexes (Terry et al. 2005; Wyman et al. 2022). The advantage of pDFA lies in its ability to handle non-independent datasets, such as cases where songs of the same individual are recorded in different contexts, or to distinguish sexes while accounting for pseudo-replication within individuals (Mundry and Sommer 2007). The pDFA employs a permutation approach to calculate the significance of discriminability between classes. The permuted data retain the non-independent structure of the dataset, and the discriminability is then compared between the original and randomized (permuted) data (Mundry and Sommer 2007). The results are presented in terms of observed and expected values of cross-classification.
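The sketch below illustrates only the underlying permutation idea, not Mundry and Sommer's (2007) nested pDFA function (which additionally handles the control and restriction factors and random selections described below); `dat` is assumed to hold the PC scores plus a factor column `pair`.

```r
library(MASS)

# Cross-validated classification accuracy of a DFA for pair identity
dfa_accuracy <- function(d) {
  fit <- lda(pair ~ ., data = d, CV = TRUE)  # leave-one-out cross-validation
  mean(fit$class == d$pair)
}

obs <- dfa_accuracy(dat)         # observed accuracy

# Null distribution: accuracy after randomly permuting pair identities
null <- replicate(1000, {
  perm <- dat
  perm$pair <- sample(perm$pair)
  dfa_accuracy(perm)
})

p_value <- mean(null >= obs)     # observed vs. permuted data
```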
We performed 4 sets of nested pDFAs using the MASS R package with 1,000 permutations and 100 random selections. Our aim was to classify the levels of the 4 classes: population, sex, pair, and individual. The first 2 sets (population and pair) comprised 7 pDFAs each, exploring combinations of sublevels of analysis (complete duets or overlapping duet sections) and types of measurement (spectro-temporal, temporal structure, or both). In addition, we performed 2 pDFAs on the spectro-temporal traits of overlapping syllable parts to assess population and pair discrimination. To ensure accuracy, we included pair identity as a control factor in the pDFAs discriminating populations and population identity as a restriction factor in the pDFAs discriminating pairs (Mundry and Sommer 2007).
The third set of pDFAs aimed to classify sexes and included 3 pDFAs: one combining temporal structure measurements from duet phrases and spectro-temporal features from non-overlapping syllable parts, the second using temporal structure measurements from duet phrases, and the third using spectro-temporal measurements from non-overlapping syllable parts. To ensure accuracy, we added individual identity as a control factor and pair identity as a restriction factor in these pDFAs to discriminate sexes. The fourth set of pDFAs classified individuals, with 3 pDFAs for females and 3 pDFAs for males. The first pDFA (for each sex) included spectro-temporal measurements from non-overlapping syllable parts and temporal structure measurements from duet phrases, the second pDFA used temporal structure measurements from duet phrases, and the third pDFA used spectro-temporal measurements from non-overlapping syllable parts. Here, we did not include any control or restriction factors to discriminate individuals.
We reanalyzed our data with discriminant function analyses (DFA; lda function; MASS package) following the same procedures as described above for pDFAs, with one exception: the DFAs did not include the control or restriction factors that allow the pDFA to account for non-independence within the dataset (which is why the pDFA is preferable). We nevertheless performed the DFAs to ensure the comparability of our results with other studies that used DFA but not pDFA (e.g., Klenova et al. 2009a; Odom et al. 2013; Chen et al. 2020). To evaluate the performance of the DFAs in accurately predicting classes (e.g., sexes), we employed a leave-one-out cross-validation procedure (Průchová et al. 2017). This method involves training the model with all but one duet and then testing it on the left-out duet; the process was repeated for all duets, and we then computed the percentage of duets that were correctly assigned.
Permutational multivariate analysis of variance
We conducted multiple PERMANOVAs using adonis2 function in vegan package v. 2.5-7 (999 permutations; Oksanen et al. 2020) to compute the proportion of variance in the multivariate acoustic traits explained by population, pair, sex, and individual. The R² was used as a metric of variance explained by each variable (Oksanen et al. 2020).
We performed 3 sets of PERMANOVAs, including the normalized Euclidean distance matrices of all acoustic variables of interest as response variables. The first set included 7 PERMANOVAs comparing how acoustic traits vary between populations and pairs (explanatory variables) at the pair level. We explored different sublevels of analysis (complete duets or overlapping duet sections) and measurement types (spectro-temporal, temporal structure of syllables, or both), and also performed a separate PERMANOVA on spectro-temporal traits of overlapping syllable parts. The second set of PERMANOVAs aimed to compare the contributions of population, sex, pair, and individual (explanatory variables) to the variance explained in individual-level traits of duets. It comprised 3 PERMANOVAs: one that merged temporal structure measurements of syllables from duet phrases with spectro-temporal features from non-overlapping syllable parts, the second using only temporal structure of syllables from duet phrases, and the third using solely spectro-temporal measurements from non-overlapping syllable parts. The third and final set of PERMANOVAs replicated those in the second set, with the distinction of being conducted separately for each sex and with individual identity as the only explanatory variable. This third set of PERMANOVAs aimed to examine how individual vocal signatures vary between sexes.
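A minimal sketch of one of these PERMANOVAs with vegan (Oksanen et al. 2020); `traits` stands for the pre-processed individual-level acoustic variables and `meta` for the matching class labels, and the term order and `by` setting are illustrative assumptions.

```r
library(vegan)

d  <- dist(traits)  # Euclidean distances on the normalized variables
pm <- adonis2(d ~ population + sex + pair + individual, data = meta,
              permutations = 999, by = "terms")
pm$R2               # proportion of variance explained by each term
```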
Potential of identity coding
The PIC for a trait is the ratio of the between-class coefficient of variation (CV; e.g., among individuals) to the within-class CV. The PIC is widely used across passerine species, enabling direct comparison of acoustic traits in predicting class identity across studies (Vignal et al. 2004; Kennedy et al. 2009; Clark and Leung 2011; Hahn et al. 2013). A previous study shows that PIC values are strongly correlated with Beecher’s information statistic (r > 0.99) and are robust against variations in the number of observations or entities (e.g., individuals) (Linhart et al. 2019). We calculated the PIC for each spectro-temporal and temporal structure measurement (excluding MFCC) to identify the most potentially valuable traits for predicting class identity (i.e., population, pair, sex, and individual) (Robisson et al. 1993; Linhart et al. 2019). We recognize this approach may inflate the chance of finding multiple informative traits, but our goal was to rank acoustic traits by their identity-informative value, which should be preserved regardless of the number of traits (Keenan et al. 2020). To ensure meaningful PIC values, we added the minimal constant needed to make all values positive for the two variables that originally contained negative values, dominant frequency slope and coordination score (COR) (Supplementary Tables S2 and S3). Spectro-temporal and temporal structure measurements with PIC values greater than one were considered highly informative for class coding (Hahn et al. 2013).
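A minimal sketch of the PIC calculation for a single trait. Note that the exact CV estimator varies among studies (e.g., small-sample corrections, or computing the between-class CV on class means rather than on the pooled sample; Linhart et al. 2019); the choice below is an assumption for illustration only.

```r
# x: numeric vector of one acoustic measurement; class: factor (e.g., pair ID)
pic <- function(x, class) {
  cv <- function(v) sd(v) / mean(v)
  cv_between <- cv(x)                       # CV across the pooled sample
  cv_within  <- mean(tapply(x, class, cv))  # mean within-class CV
  cv_between / cv_within                    # PIC > 1: informative for coding
}
```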
Results
Pair-level duet traits
We conducted pDFAs to classify populations and pairs based on pair-level acoustic metrics of different levels of analysis: complete duets, overlapping duet sections, and overlapping syllable parts. The efficiency of pDFA was assessed by comparing mean observed and expected cross-classifications across different metrics and levels of analysis. Pair-level metrics showed higher efficiency in classifying duet-producing pairs (observed: 45%; expected: 4%; difference: 41%, all P < 0.01, Figure 2B) than their respective populations (observed: 84%; expected: 55%; difference: 29%, all P < 0.02) (Figure 2A, Supplementary Table S1). These results were similar regardless of the level of analysis (Supplementary Table S1). Spectro-temporal measurements had a greater impact on the correct classification of pairs and populations (observed cross-classification, pairs: 42–67%, populations: 86–92%) than temporal structure measurements (pairs: 6–7%, populations: 64–74%). Combining input data from both measurements did not significantly improve pDFA efficiency (Figure 2, Supplementary Table S1).
Figure 2.
Percentage of populations (A), pairs (B), sexes, and individuals (C) of rufous horneros correctly cross-classified (bars) compared to chance expectation (dashed lines), based on permuted discriminant function analyses (pDFAs) on duet traits. pDFAs using pair-level traits (A and B) were conducted across different hierarchical levels of duet (CoD: complete duet; OvS: overlapping duet section; OvP: overlapping syllable parts) and types of acoustic measurements (Sp: spectro-temporal; Ts: temporal structure). pDFAs using individual-level duet traits (C) were conducted on spectro-temporal measurements of non-overlapping syllable parts (Sp) and/or temporal structure measurements of duet phrases (Ts).
PERMANOVA analyses and pDFAs produced consistent results. Pair identity explained more variance than population in pair-level duet attributes (average partial R² across analyses: 26% vs. 5%, Figure 3B). These results were consistent across levels of analysis (Supplementary Table S2). On average, pair and population explained slightly more variance (partial R² difference: 3–5%) in spectro-temporal measurements than in temporal structure measurements (Figure 3B, Supplementary Table S2).
Figure 3.
Variance in individual-level (A) and pair-level (B) duet traits explained by the identities of population, pair, sex, and individual. Variance explained is given as partial R² values obtained from PERMANOVAs. Specifically, for pair-level duet traits (B), the analyses consider population and pair identities, whereas for individual-level duet traits (A), the analyses encompass population, pair, sex, and individual identities. PERMANOVAs using pair-level traits (B) were conducted across different hierarchical levels of duet (CoD: complete duet; OvS: overlapping duet section; OvP: overlapping syllable parts) and types of acoustic measurements (Sp: spectro-temporal; Ts: temporal structure). PERMANOVAs using individual-level duet traits (A) were conducted on spectro-temporal measurements of non-overlapping syllable parts (Sp) and/or temporal structure measurements of duet phrases (Ts).
The majority of spectro-temporal and temporal structure variables used for classifying populations and pairs showed PIC values that indicate their usefulness for discrimination (Figure 4 and Supplementary Figure S1, Supplementary Table S3). The mean peak frequency of complete duets and overlapping duet sections had particularly high PIC values (1.25–1.59) for both population and pair classifications (Figure 4). On average, temporal structure measurements had equivalent PIC values (pairs: 1.29, populations: 1.10) to spectro-temporal measurements (pairs: 1.27, populations: 1.10) (Supplementary Figure S1, Supplementary Table S3).
Figure 4.
The potential of several spectro-temporal measurements and temporal structure of syllables of rufous hornero duets for discrimination of populations, pairs, sexes, and individuals of each sex, according to Potential of Identity Coding (PIC) values. In this study, we assume that values higher than 1 (indicated by dashed lines) might facilitate discrimination (Hahn et al. 2013). The acoustic measurements with the 10 highest PIC values for each class are shown as acronyms. For detailed descriptions of acoustic measurements and PIC values, refer to Tables 2–4.
Individual-level duet traits
We conducted pDFAs to classify sexes and individuals based on individual-level acoustic metrics of duet phrases and non-overlapping syllable parts. As expected, the pDFAs were highly efficient in differentiating sexes (average cross-classifications, observed: 98%, expected: 59%, difference: 39%, all P = 0.001) (Figure 2C, Supplementary Table S1). The pDFAs accurately differentiated sexes with temporal structure measurements from duet phrases (observed cross-classification: 98%) and spectro-temporal measurements from non-overlapping syllable parts (observed cross-classification: 96%). Combining both types of measurements did not further improve pDFA efficiency for sex classification (Figure 2C, Supplementary Table S1).
The pDFAs were similarly efficient in classifying individuals (average cross-classifications, observed: 47%, expected: 2%, difference: 45%, all P = 0.001) and pairs (difference: 41%, see above) (Figure 2C, Supplementary Table S1). Combining temporal structure measures from duet phrases and spectro-temporal measures from non-overlapping syllable parts slightly improved the performance of pDFAs in classifying individuals (observed cross-classification, females: 54%, males: 63%). However, individuals were less accurately classified when pDFAs were built based solely on temporal structure measurements of duet phrases (observed cross-classification, females: 18%, males: 19%). The pDFAs showed higher performance in differentiating males than females, especially when considering spectro-temporal measurements (observed cross-classifications, females: 48%, males: 59%) (Figure 2C, Supplementary Table S1).
Temporal structure measures had higher average PIC values for sex classification (1.54) than spectro-temporal measures (1.20). Measures related to syllable duration and mean silence interval between syllables showed the highest potential for identifying sexes (PIC > 2.0), whereas median frequency was particularly effective among spectro-temporal measurements (PIC = 1.74) (Figure 4 and Supplementary Figure S1, Supplementary Table S3). PIC values indicated that temporal structure measurements were on average more efficient in identifying individuals (females: 1.38, males: 1.47) than spectro-temporal measurements (females: 1.30, males: 1.30); note that, unlike the pDFAs, the PIC calculations excluded the MFCC statistics. Sound density had the highest potential to differentiate females (PIC = 1.74), whereas maximum silence interval between consecutive syllables (PIC = 1.82) and median frequency of non-overlapping syllable parts (PIC = 1.65) had the highest potential to classify males (Figure 4 and Supplementary Figure S1, Supplementary Table S3).
PERMANOVA analyses revealed that population, sex, pair, and individual explained 42% (partial R²) of variance in individual-level attributes of duets (spectro-temporal plus temporal structure measurements) (Figure 3A, Supplementary Table S2). Pair identity most efficiently explained the variation in spectro-temporal features (16%), whereas sex most efficiently explained the variance in temporal structure measures (33%). Population identity explained little variance in spectro-temporal (3%) and temporal structure measurements (0.4%). Individual identity explained 11% and 7% of variance in spectro-temporal and temporal structure features, respectively (Figure 3A). When sexes were analyzed separately, individual identity explained 32–33% of variance in female acoustic traits and 35–40% of variance in male acoustic traits (Supplementary Table S2).
Discussion
Multiple signatures in rufous hornero duets
We found evidence that hornero duets contain information for identifying pairs and individuals, aligning with previous evidence of these birds discriminating between pairs of immediate neighbors and unfamiliar individuals, presumably based on acoustic cues embedded in their duets (Amorim et al. 2022, 2023a). More specifically, we observed moderate cross-classification accuracy for pairs (45%) and individuals (47%) (Figure 2), suggesting that acoustic information of both pairs and individuals can effectively be used to distinguish conspecifics, such as neighbors and strangers. However, we cannot rule out the possibility that pair discrimination may also occur through a common group signature in the hornero neighborhood (Radford 2005). Although untested in this species, the observed signatures may also be adequate for true recognition of pairs and/or individuals (Wiley and Wiley 1977; Akçay et al. 2009), given that another suboscine, the solo-singing alder flycatcher Empidonax alnorum (Tyrannidae), can individually recognize conspecifics (Lovell and Lein 2005).
Although the analyses had similar success in classifying pairs (45%) and individuals (47%) (Figure 2), pairs explained more variation in pair-level duet traits (26%) than individuals did in individual-level traits (11%) (Figure 3). This difference between pair and individual signatures may partly reflect differences in the number and types of measurements taken at the pair and individual levels of duets (Tables 1–4). Because partners partially overlap syllables in duets, we relied on spectro-temporal measurements of non-overlapping syllable parts to capture individual variation (Figure 1). Nevertheless, pairs explained slightly more variation (11–16%) than individuals (7–11%) in the multivariate individual-level traits of duets. When analyzing each sex separately, individual identity explained more of the variation in individual-level traits (32–40%; Supplementary Table S2). Therefore, the pair signature in hornero duets is moderately stronger than the individual signature. This may be because duets result from the joint efforts of both partners toward achieving synchronization (Wirthlin et al. 2019).
Duet traits classified pairs more accurately than populations (average difference between observed and expected cross-classification: 41% for pairs vs. 29% for populations) (Figure 2). Furthermore, pair identity explained more variance than population identity, both in pair-level (average partial R² across analyses: 26% vs. 5%) and individual-level duet attributes (11–16% vs. 0–3%). These results do not support our initial prediction of widespread geographic variation in duet traits due to factors like isolation by distance and environmental variation (Capelli et al. 2020; Acero-Murcia et al. 2021; Maldonado-Coelho et al. 2023). They resemble findings in a solo-singing suboscine (Foote et al. 2013). Instead, our results point to stronger among-pair variation within hornero populations, in line with the consistently strong neighbor–stranger discrimination ability observed in both populations (Amorim et al. 2024). Recognizing neighboring pairs through signal differentiation or tuned signal perception is crucial for reducing territorial conflicts in competitive environments, especially when pairs occupy territories for extended periods (Amorim et al. 2022, 2023b) and when neighbors pose threats to resources and/or pair bonds (Akçay et al. 2009).
Our findings indicate that pair signatures in hornero duets remain consistent despite within-individual variation across duets of the same pair. The extensive syllable overlap in duets (Diniz et al. 2021) may pose challenges for individual discrimination, but pair identification might remain robust. Horneros use duets to defend year-round territories and maintain the pair bond (Diniz et al. 2018, 2020), with pair-level attributes signaling coalition quality (Diniz et al. 2021) and predicting breeding success (Diniz et al. 2019). Further studies could evaluate whether horneros rely on pair-level or individual-level duet attributes when discriminating neighbors from strangers (Bradley et al. 2013), and investigate recognition when a neighbor’s partner is replaced.
Although our results suggest that duets contain some acoustic information that may facilitate individual discrimination of pairs and other individuals, 58% of the multivariate variance in duet acoustic features remains unexplained by population, sex, pair, and individual classes (Figure 3, Supplementary Table S2). This pattern highlights the significant role of among-duet variation within pairs. Factors such as time of day, breeding phenology (Diniz et al. 2018), duet coordination, and aggressive context (Diniz et al. 2021) may contribute to this unexplained variation in hornero duets. In addition, an individual’s response to its partner-initiated song from a distance might introduce variation in time entropy, frequency modulation, and temporal coordination across duets, emphasizing the role of spatial and temporal factors in shaping acoustic traits (Ręk and Magrath 2020).
The role of temporal structure and spectro-temporal features
Our analyses reveal that spectro-temporal features outperform the temporal structure within duets in discriminating multiple classes, except for sexes (Figures 2 and 3). Duet pitch, bandwidth, and frequency modulation showed the highest PIC values for population differentiation (Figure 4). Interestingly, the dominant frequency at the duet’s onset had the second-highest PIC value for encoding pair identity (Figure 4), indicating its potential for rapid identification of other pairs. This is consistent with selection favoring vocal signatures at the beginning of duets (Beecher 1982), enabling quick responses to territorial threats such as intrusions by competitor pairs aiming to usurp territories (Amorim et al. 2022).
Spectro-temporal features outperformed temporal structure within duets in classifying populations, pairs, and individuals (Figure 2). The inclusion of 26 spectral and temporal measurements, along with 179 descriptive statistics on MFCC for each pair-level and individual-level analysis, contributed to this result. In contrast, temporal structure measurements ranged from 3 to 11 depending on the analysis level (Tables 2–4). However, although temporal structure measures showed lower accuracy in individual classification, they had the highest PIC values for both females and males (Figure 4). Combining temporal structure and spectro-temporal measures at the individual level improved cross-classification within sexes (females: 54%, males: 63%) (Figure 2). Our individual-level spectro-temporal measures focused on non-overlapping syllable parts (Table 1), acknowledging that most syllables partially overlap in duets (Laje and Mindlin 2003).
The most effective cues for coding female identity were sound density and energy distribution across the frequency range, whereas silence intervals and song pitch were better for encoding male identity (Figure 4). These sex-specific acoustic features may facilitate simultaneous recognition of duetting individuals. Temporal structure measurements related to syllable duration and rate, along with median and mean frequency, performed well in coding sexes (Figure 4). Our results coincide with previous studies on horneros, suggesting that females produce slower-paced songs with longer and higher-pitched syllables than males (Laje and Mindlin 2003; Roper 2005; Diniz et al. 2018). Further studies could explore how rhythm and temporal coordination influence vocal signatures in duets (Clink et al. 2020), considering correlated song durations between sexes in duets (Diniz et al. 2020) and the potential role of male song tempo as a non-linear forcing oscillator (Laje and Mindlin 2003).
Although our study suggests that some acoustic traits may be more reliable than others for distinguishing individuals and pairs, further experimental studies are needed to determine whether birds actually use these traits for discrimination. For example, male corncrakes (Crex crex) cannot distinguish neighbors from strangers based on variation in amplitude peaks between pulses, a highly individual-specific call trait (Budka and Osiejuk 2014). Instead, they may rely on factors like formant dispersion (Budka and Osiejuk 2013).
Acoustic monitoring of pairs and individuals
Passive acoustic monitoring is increasingly used to monitor individuals (Adi et al. 2010; Budka et al. 2015; Stowell et al. 2019; Bedoya and Molles 2021) and is well suited to horneros, given their territorial fidelity (Diniz et al. 2018; Amorim et al. 2023b). Our pDFA and DFA results indicate a viable supervised classification approach. Spectro-temporal features of overlapping duet sections may allow accurate assignment of new recordings to pairs (pDFA: 67%; DFA: 87%). These duet-based estimations of pair identity are within the range reported for other duetting species: crimson-breasted shrike (60%), California towhee (Pipilo crissalis) (91%), and cranes (G. japonensis: 91–96%; L. leucogeranus: 93%) (Benedict and McEntee 2008; Klenova et al. 2009b; van den Heuvel et al. 2013; Klenova et al. 2020). Additionally, restricting classifications to neighboring territories and using spectro-temporal features of non-overlapping syllables, along with temporal structure measurements of duet phrases, may improve accuracy in estimating individual identities within duets (pDFA–DFA, females: 54–76%, males: 63–85%).
In summary, our study demonstrates that rufous hornero duets carry information for conspecific discrimination, with a stronger emphasis on pair identity than on individual contributors. Spectro-temporal attributes are key for pair and individual vocal signatures, whereas the temporal structure of syllables within duet phrases distinguishes sexes and enhances individual vocal signatures. The initial frequency of the duet has strong potential for encoding pair identity, which would enable rapid discrimination by receivers. Specific acoustic traits contributing to individual vocal signatures vary between sexes, which may facilitate simultaneous assessment by receivers. These classifications (pDFA and DFA) should aid in passive acoustic monitoring of hornero pairs and individuals. Further research is needed to experimentally test whether horneros use these duet traits to discriminate between pairs and individuals.
Supplementary Material
Supplementary material can be found at https://academic.oup.com/cz.
Acknowledgments
We thank Desirée Ramos, Isadora Ribeiro, and Pedro de Siracusa for their contributions to obtaining duet recordings in Brasília. We are grateful to Carlos de Melo, Fernando Almeida, Indra Rosendo, Luiza Carvalho, Pietra Guimarães, Renan Ramos, and Renato Oliveira for helping in acoustic data annotation. We thank Roger Mundry for providing the R function used to perform the pDFAs.
Contributor Information
Pedro Diniz, Instituto de Ciências Biológicas, Programa de Pós-Graduação em Ecologia, Universidade de Brasília, Brasília, DF 70910-900, Brazil; Departamento de Zoologia, Instituto de Ciências Biológicas, Universidade de Brasília, Brasília, DF 70910-900, Brazil.
Edvaldo F Silva-Jr, Departamento de Zoologia, Instituto de Ciências Biológicas, Universidade de Brasília, Brasília, DF 70910-900, Brazil.
Gianlucca S Rech, Departamento de Zoologia, Instituto de Ciências Biológicas, Universidade de Brasília, Brasília, DF 70910-900, Brazil.
Pedro H L Ribeiro, Departamento de Zoologia, Instituto de Ciências Biológicas, Universidade de Brasília, Brasília, DF 70910-900, Brazil.
André C Guaraldo, Instituto de Ciências Biológicas, Programa de Pós-Graduação em Biodiversidade e Conservação da Natureza, Universidade Federal de Juiz de Fora, Juiz de Fora, MG 36036-900, Brazil; Laboratório de Ecologia Comportamental e Ornitologia, Departamento de Zoologia, Universidade Federal do Paraná, Curitiba, PR 80210-170, Brazil.
Regina H Macedo, Departamento de Zoologia, Instituto de Ciências Biológicas, Universidade de Brasília, Brasília, DF 70910-900, Brazil.
Paulo S Amorim, Instituto de Ciências Biológicas, Programa de Pós-Graduação em Biodiversidade e Conservação da Natureza, Universidade Federal de Juiz de Fora, Juiz de Fora, MG 36036-900, Brazil.
Authors’ Contributions
P.D.: Conceptualization; Methodology; Investigation; Data curation; Validation; Formal analysis; Visualization; Writing—original draft; Writing—review & editing; Project administration; Funding acquisition. E.F.S.: Methodology, Investigation, Data curation, Writing—review & editing; G.S.R.: Methodology, Investigation, Data curation, Writing—review & editing; P.H.L.R.: Methodology, Investigation, Data curation, Writing—review & editing; A.C.G.: Methodology; Resources; Writing—review & editing; Supervision. R.H.M.: Resources; Writing—review & editing; Supervision; Funding acquisition. P.S.A.: Conceptualization; Methodology; Investigation; Data curation; Writing—review & editing; Funding acquisition.
Funding
P.D. and P.S.A. received Ph.D. scholarships from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) (Finance Code 001). P.D. received a Postdoctoral fellowship from CAPES (grant number: 88887.469218/2019–00). R.H.M. received a fellowship from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for the duration of the study. Funding was also provided by the Animal Behavior Society [ABS Student Research Grant to P.S.A.], Association of Field Ornithologists [E. Alexander Bergstrom Memorial Research Award to P.S.A.], and American Ornithological Society [Postdoctoral Research Award to P.D.]. We acknowledge the logistic and financial support provided by Programa de Pós-Graduação em Ecologia from Universidade de Brasília in association with Programa de Excelência Acadêmica PROEX/CAPES (1789/2015); and the financial support provided by CNPq (471945/2013-7).
Conflict of Interest
The authors declare that they have no conflicts of interest.
Ethics Statement
This study used a dataset of rufous hornero duet recordings made in 2 populations (Brasília and Juiz de Fora). The dataset for Brasília was acquired following the approval of the Brazilian environmental agencies Instituto Chico Mendes de Conservação da Biodiversidade (ICMBio, license number 40806–1) and Centro Nacional de Pesquisa para Conservação das Aves Silvestres (CEMAVE, license number 3886). The dataset for Juiz de Fora was acquired following approval by the local ethics and animal care committee (CEUA-UFJF: 027/2019-CEUA) and the Brazilian environmental agencies (ICMBio, license number 69700; CEMAVE, license number 4426). We adhered to international animal research guidelines (Association for the Study of Animal Behaviour and Animal Behavior Society). Experimental trials featured one song (< 13 s) per trial, spaced 24 hours apart, following Diniz et al. (2020, 2021) and Amorim et al. (2022, 2023). Birds resumed normal activities within minutes post-playback; no bird abandoned its territory after recording.
Data Availability
Data and code for reproducing the statistical analyses conducted in this study are available on Mendeley Data at the following link: https://dx.doi.org/10.17632/fpmmftpsy2.1
References
- Acero-Murcia AC, Amaral FR, Barros FC, Ribeiro TS, Miyaki CY et al., 2021. Ecological and evolutionary drivers of geographic variation in songs of a Neotropical suboscine bird: The Drab-breasted Bamboo Tyrant (Hemitriccus diops, Rhynchocyclidae). Ornithology 138:1–15.
- Adi K, Johnson MT, Osiejuk TS, 2010. Acoustic censusing using automatic vocalization classification and identity recognition. J Acoust Soc Am 127:874–883.
- Akçay C, Swift RJ, Reed VA, Dickinson JL, 2013. Vocal kin recognition in kin neighborhoods of western bluebirds. Behav Ecol 24:898–905.
- Akçay C, Wood WE, Searcy WA, Templeton CN, Campbell SE et al., 2009. Good neighbour, bad neighbour: song sparrows retaliate against aggressive rivals. Anim Behav 78:97–102.
- Amador A, Mindlin GB, 2023. The dynamics behind diversity in suboscine songs. J Exp Biol 226:jeb227975.
- Amador A, Trevisan M, Mindlin G, 2005. Simple neural substrate predicts complex rhythmic structure in duetting birds. Phys Rev E 72:1–7.
- Amorim PS, Diniz P, Rossi MF, Guaraldo AC, 2022. Out of sight, out of mind: dear enemy effect in the rufous hornero, Furnarius rufus. Anim Behav 187:167–176.
- Amorim PS, Guaraldo AC, Diniz P, 2023a. Horneros consider their neighbors as precious foes regardless of territory size and human disturbance. Behav Processes 212:104942.
- Amorim PS, Guaraldo AC, Diniz P, 2024. Consistent dear enemy effect despite variation in territorial centrality and population density in Rufous Horneros. Emu 124:252–260.
- Amorim PS, Guaraldo AC, Rossi MF, Diniz P, 2023b. Home range, territory, and partner replacement in the Rufous Hornero Furnarius rufus. Acta Ornithol 58:55–63.
- Araya-Salas M, Smith-Vidaurre G, 2017. warbleR: an r package to streamline analysis of animal acoustic signals. Methods Ecol Evol 8:184–191.
- Baker MC, 2004. The chorus song of cooperatively breeding Laughing kookaburras (Coraciiformes, Halcyonidae: Dacelo novaeguineae): characterization and comparison among groups. Ethology 110:21–35.
- Battiston MM, Wilson DR, Graham BA, Kovach KA, Mennill DJ, 2015. Rufous-and-white wrens Thryophilus rufalbus do not exhibit dear enemy effects towards conspecific or heterospecific competitors. Curr Zool 61:23–33.
- Bedoya CL, Molles LE, 2021. Acoustic censusing and individual identification of birds in the wild. bioRxiv:1–8. doi: https://doi.org/10.1101/2021.10.29.466450
- Bee MA, 2008. Finding a mate at a cocktail party: spatial release from masking improves acoustic mate recognition in grey treefrogs. Anim Behav 75:1781–1791.
- Beecher MD, 1982. Signature systems and kin recognition. Am Zool 22:477–490.
- Benedict L, McEntee JP, 2008. Context, structural variability and distinctiveness of California towhee (Pipilo crissalis) vocal duets. Ibis 115:77–86.
- Blackburn G, Ridley AR, Dutour M, 2023. Australian Magpies discriminate between the territorial calls of intra- and extra-group conspecifics. Ibis 165:1016–1021.
- Boeckle M, Bugnyar T, 2012. Long-term memory for affiliates in ravens. Curr Biol 22:801–806.
- Bradley DW, Molles LE, Waas JR, 2013. Local–foreign dialect discrimination and responses to mixed-dialect duets in the North Island kōkako. Behav Ecol 24:570–578.
- Bragina EV, Beme IR, 2010. Siberian crane duet as an individual signature of a pair: Comparison of visual and statistical classification techniques. Acta Ethol 13:39–48.
- Brecht KF, Nieder A, 2020. Parting self from others: Individual and self-recognition in birds. Neurosci Biobehav Rev 116:99–108.
- Briefer E, Rybak F, Aubin T, 2008. When to be a dear enemy: flexible acoustic relationships of neighbouring skylarks, Alauda arvensis. Anim Behav 76:1319–1325.
- Budde C, 2001. Individual features in the calls of the Grey Crowned Crane, Balearica regulorum gibbericeps. Ostrich 72:134–139.
- Budka M, Osiejuk TS, 2013. Neighbour–stranger call discrimination in a nocturnal rail species, the Corncrake Crex crex. J Ornithol 154:685–694.
- Budka M, Osiejuk TS, 2014. Individually specific call feature is not used to neighbour-stranger discrimination: the corncrake case. PLoS One 9:e104031.
- Budka M, Wojas L, Osiejuk TS, 2015. Is it possible to acoustically identify individuals within a population? J Ornithol 156:481–488.
- Calcari C, Pilenga C, Baciadonna L, Gamba M, Favaro L, 2021. Long-term stability of vocal individuality cues in a territorial and monogamous seabird. Anim Cogn 24:1165–1169.
- Capelli D, Batalha-Filho H, Japyassú HF, 2020. Song variation in the Caatinga suboscine Silvery-cheeked Antshrike (Sakesphorus cristatus) suggests latitude and São Francisco River as drivers of geographic variation. J Ornithol 161:873–884.
- Carlson NV, Kelly EM, Couzin I, 2020. Individual vocal recognition across taxa: a review of the literature and a look into the future. Philos Trans R Soc London Ser B 375:20190479.
- Charrier I, Mathevon N, Jouventin P, 2003. Vocal signature recognition of mothers by fur seal pups. Anim Behav 65:543–550.
- Chelysheva EV, Klenova AV, Volodin IA, Volodina EV, 2023. Advertising sex and individual identity by long-distance chirps in wild-living mature cheetahs (Acinonyx jubatus). Ethology 129:288–300.
- Chen G, Xia C, Zhang Y, 2020. Individual identification of birds with complex songs: The case of green-backed flycatchers Ficedula elisae. Behav Processes 173:104063.
- Christensen C, Radford AN, 2018. Dear enemies or nasty neighbors? Causes and consequences of variation in the responses of group-living species to territorial intrusions. Behav Ecol 29:1004–1013.
- Clark JA, Leung J, 2011. Vocal distinctiveness and information coding in a suboscine with multiple song types: eastern wood-pewee. Wilson J Ornithol 123:835–840.
- Clink DJ, Tasirin JS, Klinck H, 2020. Vocal individuality and rhythm in male and female duet contributions of a nonhuman primate. Curr Zool 66:173–186.
- Clink DJ, Zafar M, Ahmad AH, Lau AR, 2021. Limited evidence for individual signatures or site-level patterns of variation in male Northern Gray Gibbon (Hylobates funereus) duet codas. Int J Primatol 42:896–914.
- Colegrave N, Ruxton GD, 2018. Using biological insight and pragmatism when thinking about pseudoreplication. Trends Ecol Evol 33:28–35.
- Dahlin CR, Benedict L, 2014. Angry birds need not apply: A perspective on the flexible form and multifunctionality of avian vocal duets. Ethology 120:1–10.
- Diniz P, Macedo RH, Webster MS, 2019. Duetting correlates with territory quality and reproductive success in a suboscine bird with low extra-pair paternity. Auk 136:1–13.
- Diniz P, Ramos DM, Webster MS, Macedo RH, 2021. Rufous horneros perceive and alter temporal coordination of duets during territorial interactions. Anim Behav 174:175–185.
- Diniz P, Rech GS, Ribeiro PHL, Webster MS, Macedo RH, 2020. Partners coordinate territorial defense against simulated intruders in a duetting ovenbird. Ecol Evol 10:81–92.
- Diniz P, Silva EF Jr, Webster MS, Macedo RH, 2018. Duetting behavior in a Neotropical ovenbird: Sexual and seasonal variation and adaptive signaling functions. J Avian Biol 49:jav-01637.
- Feng J-J, Cui L-W, Ma C-Y, Fei H-L, Fan P-F, 2014. Individuality and stability in male songs of cao vit gibbons (Nomascus nasutus) with potential to monitor population dynamics. PLoS One 9:e96317.
- Foote JR, Palazzi E, Mennill DJ, 2013. Songs of the Eastern Phoebe, a suboscine songbird, are individually distinctive but do not vary geographically. Bioacoustics 22:137–151.
- Gémard C, Aubin T, Bonadonna F, 2019. Males’ calls carry information about individual identity and morphological characteristics of the caller in burrowing petrels. J Avian Biol 50:jav.02270.
- Guggenberger M, Adreani NM, Foerster K, Kleindorfer S, 2022. Vocal recognition of distance calls in a group-living basal bird: The greylag goose, Anser anser. Anim Behav 186:107–119.
- Hahn AH, Krysler A, Sturdy CB, 2013. Female song in black-capped chickadees (Poecile atricapillus): Acoustic song features that contain individual identity information and sex differences. Behav Processes 98:98–105.
- Hall ML, 2004. A review of hypotheses for the functions of avian duetting. Behav Ecol Sociobiol 55:415–430.
- Hall ML, 2009. A review of vocal duetting in birds. Adv Study Behav 40:67–121.
- K. Lisa Yang Center for Conservation Bioacoustics, 2022. Raven Pro: Interactive Sound Analysis Software (Version 1.6) [Computer software]. Ithaca, NY: The Cornell Lab of Ornithology. Available from: http://www.birds.cornell.edu/raven
- Keenan S, Mathevon N, Stevens JM, Nicolè F, Zuberbühler K et al., 2020. The reliability of individual vocal signature varies across the bonobo’s graded repertoire. Anim Behav 169:9–21.
- Kennedy RA, Evans CS, McDonald PG, 2009. Individual distinctiveness in the mobbing call of a cooperative bird, the noisy miner Manorina melanocephala. J Avian Biol 40:481–490.
- Klenova AV, Goncharova MV, Kashentseva TA, 2020. Long-term stability in the vocal duets of the endangered Siberian Crane Leucogeranus leucogeranus. Polar Biol 43:813–823.
- Klenova AV, Volodin IA, Volodina EV, 2009a. The variation in reliability of individual vocal signature throughout ontogenesis in the red-crowned crane Grus japonensis. Acta Ethol 12:29–36.
- Klenova AV, Volodin IA, Volodina EV, 2009b. Examination of pair-duet stability to promote long-term monitoring of the endangered red-crowned crane (Grus japonensis). J Ethol 27:401–406.
- Kuhn M, 2008. Building predictive models in R using the caret package. J Stat Softw 28:1–26.
- Laje R, Mindlin G, 2003. Highly structured duets in the song of the South American hornero. Phys Rev Lett 91:1–4.
- Lambrechts MM, Dhondt AA, 1995. Individual voice discrimination in birds. In: Power DM, editor. Current Ornithology. Boston: Springer, 115–139.
- Langmore NE, 1998. Functions of duet and solo songs of female birds. Trends Ecol Evol 13:136–140.
- Lau AR, Zafar M, Ahmad AH, Clink DJ, 2022. Investigating temporal coordination in the duet contributions of a pair-living small ape. Behav Ecol Sociobiol 76:91.
- Lehongre K, Aubin T, Robin S, Del Negro C, 2008. Individual signature in canary songs: Contribution of multiple levels of song structure. Ethology 114:425–435.
- Linhart P, Osiejuk TS, Budka M, Šálek M, Špinka M et al., 2019. Measuring individual identity information in animal signals: Overview and performance of available identity metrics. Methods Ecol Evol 10:1558–1570.
- Lovell SF, Lein MR, 2005. Individual recognition of neighbors by song in a suboscine bird, the alder flycatcher Empidonax alnorum. Behav Ecol Sociobiol 57:623–630.
- Lyon RH, Ordubadi A, 1982. Use of cepstra in acoustical signal analysis. J Mech Des 104:303–306.
- Maldonado-Coelho M, dos Santos SS, Isler ML, Svensson-Coelho M, Sotelo-Muñoz M et al., 2023. Evolutionary and ecological processes underlying geographic variation in innate bird songs. Am Nat 202:E31–E52.
- Masco C, Allesina S, Mennill DJ, Pruett-Jones S, 2015. The Song Overlap Null model Generator (SONG): A new tool for distinguishing between random and non-random song overlap. Bioacoustics 25:29–40.
- McGregor PK, Butlin RK, Guilford T, Krebs JR, 1997. Signalling in territorial systems: A context for individual identification, ranging and eavesdropping. Philos Trans R Soc Lond B Biol Sci 340:237–244.
- Mitani JC, 1985. Responses of gibbons (Hylobates muelleri) to self, neighbor, and stranger song duets. Int J Primatol 6:193–200.
- Mundry R, Sommer C, 2007. Discriminant function analysis with nonindependent data: Consequences and an alternative. Anim Behav 74:965–976.
- Odom KJ, Cain KE, Hall ML, Langmore NE, Mulder RA et al., 2021. Sex role similarity and sexual selection predict male and female song elaboration and dimorphism in fairy-wrens. Ecol Evol 11:17901–17919.
- Odom KJ, Slaght JC, Gutiérrez RJ, 2013. Distinctiveness in the territorial calls of great horned owls within and among years. J Raptor Res 47:21–30.
- Oksanen J, Blanchet FG, Friendly M, Kindt R, Legendre P et al., 2020. vegan: Community Ecology Package. R package version 2.5-7. Available from: https://cran.r-project.org/web/packages/vegan/index.html
- Osiejuk TS, Żbikowski B, Wheeldon A, Budka M, 2019. Hey mister Tambourine Dove, sing a song for me: Simple but individually specific songs of Turtur tympanistria from Cameroon. Avian Res 10:14.
- Průchová A, Jaška P, Linhart P, 2017. Cues to individual identity in songs of songbirds: Testing general song characteristics in Chiffchaffs Phylloscopus collybita. J Ornithol 158:911–924.
- R Core Team, 2023. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available from: https://www.R-project.org/
- Radford AN, 2005. Group-specific vocal signatures and neighbour–stranger discrimination in the cooperatively breeding green woodhoopoe. Anim Behav 70:1227–1234.
- Ręk P, Magrath RD, 2020. Visual displays enhance vocal duet production and the perception of coordination despite spatial separation of partners. Anim Behav 168:231–241.
- Reus K, Soma M, Anichini M, Gamba M, Kloots MH et al., 2021. Rhythm in dyadic interactions. Philos Trans R Soc B Biol Sci 376:20200337.
- Robisson P, Aubin T, Bremond J-C, 1993. Individuality in the voice of the emperor penguin Aptenodytes forsteri: Adaptation to a noisy environment. Ethology 94:279–290.
- Roper JJ, 2005. Sexually distinct songs in the duet of the sexually monomorphic Rufous Hornero. J Field Ornithol 76:234–236.
- Sibiryakova OV, Volodin IA, Matrosova VA, Volodina EV, Garcia AJ et al., 2015. The power of oral and nasal calls to discriminate individual mothers and offspring in red deer, Cervus elaphus. Front Zool 12:2.
- Smith-Vidaurre G, Perez-Marrufo V, Wright TF, 2021. Individual vocal signatures show reduced complexity following invasion. Anim Behav 179:15–39.
- Spezie G, Torti V, Bonadonna G, de Gregorio C, Valente D et al., 2023. Evidence for acoustic discrimination in lemurs: A playback study on wild indris Indri indri. Curr Zool 69:41–49.
- Stowell D, Petrusková T, Šálek M, Linhart P, 2019. Automatic acoustic identification of individuals in multiple species: Improving identification across recording conditions. J R Soc Interface 16:20180940.
- Suthers RA, 1994. Variable asymmetry and resonance in the avian vocal tract: A structural basis for individually distinct vocalizations. J Comp Physiol A 175:457–466.
- Ten Cate C, 2021. Re-evaluating vocal production learning in non-oscine birds. Philos Trans R Soc London Ser B 376:20200249.
- Terry AM, Peake TM, McGregor PK, 2005. The role of vocal individuality in conservation. Front Zool 2:10.
- Tobias JA, Sheard C, Seddon N, Meade A, Cotton AJ et al., 2016. Territoriality, social bonds, and the evolution of communal signaling in birds. Front Ecol Evol 4:1–15.
- van den Heuvel IM, Cherry MI, Klump GM, 2013. Individual identity, song repertoire and duet function in the Crimson-breasted Shrike (Laniarius atrococcineus). Bioacoustics 22:1–15.
- Vignal C, Mathevon N, Mottin S, 2004. Audience drives male songbird response to partner’s voice. Nature 430:448–451.
- Villain AS, Mahamoud-Issa M, Doligez B, Vignal C, 2017. Vocal behaviour of mates at the nest in the White-throated Dipper Cinclus cinclus: Contexts and structure of vocal interactions, pair-specific acoustic signature. J Ornithol 158:897–910.
- Volodin IA, Volodina EV, Klenova AV, Matrosova VA, 2015. Gender identification using acoustic analysis in birds without external sexual dimorphism. Avian Res 6:20.
- Wiley RH, Wiley MS, 1977. Recognition of neighbors’ duets by stripe-backed wrens Campylorhynchus nuchalis. Behaviour 62:10–34.
- Williams BK, Titus K, 1988. Assessment of sampling stability in ecological applications of discriminant analysis. Ecology 69:1275–1285.
- Wirthlin M, Chang EF, Knörnschild M, Krubitzer LA, Mello CV et al., 2019. A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron 104:87–99.
- Wyman MT, Walkenhorst B, Manser MB, 2022. Selection levels on vocal individuality: Strategic use or byproduct. Curr Opin Behav Sci 46:101140.
- Xia C, Lin X, Liu W, Lloyd H, Zhang Y, 2012. Acoustic identification of individuals within large avian populations: A case study of the Brownish-flanked Bush Warbler, South-Central China. PLoS One 7:e42528.