Biology Letters. 2019 Dec 4;15(12):20190555. doi: 10.1098/rsbl.2019.0555

Dogs perceive and spontaneously normalize formant-related speaker and vowel differences in human speech sounds

Holly Root-Gutteridge1, Victoria F Ratcliffe2, Anna T Korzeniowska1, David Reby1,3
PMCID: PMC6936018  PMID: 31795850

Abstract

Domesticated animals have been shown to recognize basic phonemic information from human speech sounds and to recognize familiar speakers from their voices. However, whether animals can spontaneously identify words across unfamiliar speakers (speaker normalization) or spontaneously discriminate between unfamiliar speakers across words remains to be investigated. Here, we assessed these abilities in domestic dogs using the habituation–dishabituation paradigm. We found that while dogs habituated to the presentation of a series of different short words from the same unfamiliar speaker, they significantly dishabituated to the presentation of a novel word from a new speaker of the same gender. This suggests that dogs spontaneously categorized the initial speaker across different words. Conversely, dogs who habituated to the same short word produced by different speakers of the same gender significantly dishabituated to a novel word, suggesting that they had spontaneously categorized the word across different speakers. Our results indicate that the ability to spontaneously recognize both the same phonemes across different speakers, and cues to identity across speech utterances from unfamiliar speakers, is present in domestic dogs and thus not a uniquely human trait.

Keywords: speaker normalization, vowel perception, speaker discrimination, speech perception

1. Background

Speech sounds vary among speakers owing to differences in body size, age, gender and other idiosyncratic attributes [1,2], and thus effective speech perception relies on a listener's ability to recognize phonemes independent of such speaker variability, a perceptual mechanism known as speaker normalization [3]. In human speech, vowels are represented by specific formant frequency patterns, but the absolute values of the formants vary across speakers owing to size-, age- or other individual differences in vocal tract length [4,5]. Yet these speaker-related differences in formant values encode socially relevant indexical and identity cues across phonemes [6,7]. Thus, human listeners must normalize these two dimensions of speech variation to recognize words across different speakers and to identify individual speakers across different words, an ability that was once posited to be uniquely human [8]. Although some non-human animals can be trained to recognize phonemes across speakers and have also been shown to recognize familiar humans from their voices (review: [8]), both the extent to which animals can spontaneously perform speaker normalization to recognize words across unfamiliar speakers and their ability to spontaneously discriminate between unfamiliar speakers across speech sounds remain to be investigated.
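The distinction between absolute formant values and speaker-independent formant patterns can be illustrated with a toy calculation (the Hz values and the log-mean normalization scheme below are illustrative assumptions, not the stimuli or methods of this study):

```python
import math

def normalize(formants):
    """Log-mean normalization: subtract the mean log-formant, which
    cancels the uniform scaling caused by vocal tract length."""
    logs = [math.log(f) for f in formants]
    mean_log = sum(logs) / len(logs)
    return [l - mean_log for l in logs]

# Hypothetical first three formants (Hz) of one vowel:
speaker_a = [730, 1090, 2440]              # longer vocal tract, lower formants
speaker_b = [f * 1.17 for f in speaker_a]  # shorter tract: uniformly ~17% higher

print(normalize(speaker_a))
print(normalize(speaker_b))  # same pattern despite different absolute values
```

Because a change in vocal tract length scales all formants by roughly the same factor, subtracting the mean log-formant removes that factor and leaves a vowel-specific pattern shared across speakers.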

Here, we use domestic dogs (Canis familiaris) to investigate these abilities in a non-human mammal that is regularly exposed to human speech utterances that function as interspecific signals. Indeed, dogs are known to recognize basic phonemic information, for example, when following commands (even in the absence of tonal cues [9,10]), and can recognize familiar human voices speaking known phrases [11,12]. However, in order to recognize words across speakers, dogs must attend to the relative positions of formants in human speech rather than to their absolute values by normalizing variation in the acoustic signal that is related to speaker identity or gender [13]. Moreover, to discriminate between unfamiliar speakers, dogs must also be able to attend to these same speaker cues across different phonemes. As performing one task could preclude the other, we investigated whether dogs would spontaneously normalize variation in human speech to recognize words across speakers, and speakers across words, using the habituation–dishabituation paradigm. This paradigm has been used widely in perceptual studies involving animal or non-verbal participants [14–16], and has been used previously to explore dogs’ ability to discriminate conspecific barks produced by different individuals [17].

To investigate dogs’ ability to spontaneously discriminate between unfamiliar speakers, we tested whether dogs would habituate to a short series of different single syllable words [i.e. H-vowel-D] that varied only in the vowel and were produced by the same unfamiliar speaker, then dishabituate to the presentation of a new [H-vowel-D] word from a different speaker, then re-habituate to a final novel [H-vowel-D] word from the original speaker (electronic supplementary material, figure S1A). We predicted that if the dogs spontaneously categorized the identity of the initial speaker across words and recognized a change in speaker, then they would show a longer response to the dishabituation stimulus word than to the final habituation or re-habituation stimuli words.

Next, we investigated dogs' ability to spontaneously normalize voice differences across speakers in order to discriminate between phonemes. We exposed them to four examples of the same word produced by four unfamiliar, same-gender speakers, then introduced a new speaker producing a new word (electronic supplementary material, figure S1B). We predicted that if the dogs spontaneously categorized the word produced by the different speakers, then they would show an increase in response duration to the dishabituation stimulus, demonstrating that they recognized the change in word and had spontaneously normalized production across speakers.

2. Methods and materials

Voices from 13 adult men and 14 adult women who were not familiar to the dogs were sampled with a randomized presentation of voices across conditions. We used four habituation, one dishabituation and one re-habituation sound stimulus trials with 6 s of silence between each audio stimulus presentation [18]. Speaker identity and order of presentation of vowels were all pseudo-randomized across stimuli. For further details, see electronic supplementary material, Methods.

For trials in condition 1 (speaker discrimination), the discrimination of unfamiliar voices was tested with sequences using the voices of four unfamiliar speakers who produced monosyllabic words. Each stimulus word started with ‘h’ and ended in ‘d’, following [19], and included one of nine vowel sounds: ‘had’, ‘head’, ‘heard’, ‘heed’, ‘hid’, ‘hod’, ‘hood’, ‘who’d’ and ‘hud’. In condition 2 (speaker normalization), the discrimination of the vowels [a], [i] and [o] was tested using ‘had’, ‘hid’ and ‘who’d’. These vowels were chosen and paired so as to be clearly distinct from one another and difficult for dogs to confuse. In both conditions, half of the stimulus sequences involved female voices and the other half involved male voices. While these short words may be familiar to dogs, they are not typically used as commands in English.
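The two trial sequences can be sketched as follows (the speaker labels and this randomization code are hypothetical illustrations; the study's actual pseudo-randomization is described in the electronic supplementary material):

```python
import random

WORDS = ["had", "head", "heard", "heed", "hid", "hod", "hood", "who'd", "hud"]

def speaker_discrimination_sequence(speakers, rng):
    """Condition 1: four different words from one speaker (habituation),
    a new word from a second speaker (dishabituation), then another new
    word from the original speaker (re-habituation)."""
    original, novel = rng.sample(speakers, 2)
    words = rng.sample(WORDS, 6)
    return ([(original, w) for w in words[:4]]   # habituation trials 1-4
            + [(novel, words[4])]                # dishabituation trial
            + [(original, words[5])])            # re-habituation trial

def speaker_normalization_sequence(speakers, rng):
    """Condition 2: the same word from four different speakers (habituation),
    then a new word from a fifth speaker (dishabituation). The composition of
    the re-habituation trial is given in ESM figure S1B and omitted here."""
    habituated_word, novel_word = rng.sample(["had", "hid", "who'd"], 2)
    chosen = rng.sample(speakers, 5)
    return ([(s, habituated_word) for s in chosen[:4]]
            + [(chosen[4], novel_word)])

rng = random.Random(0)
print(speaker_discrimination_sequence(["F1", "F2", "F3", "F4", "F5"], rng))
```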

A total of 70 dogs participated in the between-subject design study. Each dog heard six sounds, with 24 dogs retained in each of the two conditions (see electronic supplementary material for demographic details). Videos were assessed before coding and discarded if the dog either did not visibly respond to the stimulus by moving any part of their face or body, including their eyes (n = 4 dogs), or was distracted during trials by non-stimulus sounds or events (n = 18). The stimuli were presented from an Apple MacBook Air through a Behringer Europort MPA40BT-PRO speaker that was set to conversational volume (approx. 65 dB) and placed on one side of the dog, counterbalanced across subjects. The dogs' reactions were filmed on a Sony FDR-AX100 camcorder positioned on a tripod. Response duration was measured from the initial onset of response (e.g. ears moving into a forward position, eyes looking in the direction of the speaker, head turning or moving towards the speaker) until the dog stopped visibly responding or the next trial began. All above-mentioned responses were coded as ‘change in behaviour’; lack of response was coded as a duration of zero. All videos were coded blind in Sportscode Gamebreaker 11 (Sportstec, Warriewood, NSW, Australia) by H.R.G., with 25% double-coded blind by A.T.K. (see electronic supplementary material for details).

Statistical tests were performed in SPSS v. 25 (SPSS Inc., Chicago, IL, USA). Linear mixed effect models (LMEs) fitted with restricted maximum-likelihood (REML) estimation were used to examine the effect of trial on response duration. Dog identity was included as a random effect; fixed effects were trial, dog sex, age in years, breed group, recording location and speaker gender. The significance threshold was Bonferroni-corrected for multiple comparisons to p < 0.007. The variables met LME assumptions and residuals were normal, as indicated by Shapiro–Wilk tests.
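The structure of this analysis can be sketched with synthetic data (the study used SPSS; the effect sizes below and the assumption of seven Bonferroni comparisons are illustrative, not taken from the paper):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_dogs, n_trials = 24, 6

df = pd.DataFrame({
    "dog": np.repeat(np.arange(n_dogs), n_trials),
    "trial": np.tile(np.arange(1, n_trials + 1), n_dogs),
})
# Simulated response durations: habituation decline over trials 1-4,
# a dishabituation rebound at trial 5, plus per-dog random intercepts.
dog_intercepts = rng.normal(0.0, 0.5, n_dogs)
df["duration"] = (
    3.0 - 0.4 * df["trial"]
    + np.where(df["trial"] == 5, 1.5, 0.0)
    + dog_intercepts[df["dog"]]
    + rng.normal(0.0, 0.3, len(df))
)

# LME with trial as a fixed effect and dog identity as a random
# intercept, fitted by REML (the statsmodels default).
fit = smf.mixedlm("duration ~ C(trial)", df, groups=df["dog"]).fit(reml=True)

# Bonferroni-style threshold: 0.05/7 ~= 0.007 would match the paper's
# corrected threshold if seven comparisons are assumed (an illustration,
# not the authors' stated count).
alpha = 0.05 / 7
print(fit.summary())
```

With the simulated rebound, the fixed-effect coefficient for trial 5 (dishabituation) sits above that for trial 4, mirroring the pattern of results reported below.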

3. Results

Duration of the dogs' responses in each trial was not significantly different across conditions (F1,187.5 = 5.961, p = 0.016, with corrected threshold of p = 0.007). For both conditions, only the habituation trial factor had a significant effect on response duration, while there were no other significant fixed effects (p > 0.05 for all other variables, see electronic supplementary material for details).

The LME results were similar for both conditions: habituation trial had a significant effect on response duration (condition 1, speaker discrimination: F5,115 = 4.271, p = 0.001; condition 2, speaker normalization: F5,115 = 5.421, p < 0.001). Response duration decreased in both conditions from habituation trial 1 to trial 4 (condition 1: p = 0.047; figure 1a; condition 2: p = 0.001, figure 1b), showing that dogs habituated to the stimuli over time.

Figure 1.

Boxplots of duration of response to stimulus sounds for (a) condition 1: speaker discrimination (n = 24 dogs), and (b) condition 2: speaker normalization (n = 24 dogs). P-values < 0.05 marked by *, p < 0.01 marked by **, p < 0.001 marked by ***, and outliers are marked by circles. H, habituation trial; DH, dishabituation trial; RH, re-habituation trial.

For both conditions, dogs’ response durations increased significantly for the dishabituation trial compared to final habituation trial 4 (condition 1: p = 0.007, condition 2: p = 0.001) and the re-habituation trial (condition 1: p = 0.001, condition 2: p < 0.001), showing that they dishabituated to the change in stimulus and re-habituated to the repeated stimulus. Response duration in the re-habituation trial was not significantly different from the final habituation trial 4 (condition 1: p = 0.413, condition 2: p = 0.778), while the dishabituation trial response duration was not significantly different from habituation trial 1 (condition 1: p = 0.467; condition 2: p = 0.953). Thus, the duration of dogs' responses to the dishabituation trial was similar to that of their original response to the first stimulus.

These results show that dogs habituated to the same speaker producing four different words and then dishabituated to a new speaker producing a new word (figure 1a). This demonstrates that dogs can spontaneously categorize short words as belonging to the same unfamiliar speaker on the basis of a very limited set of four stimuli, and are thus able to detect a change in speaker identity even when the new speaker produces a word that was not used in the habituation sequence. Conversely, dogs habituated to the same word spoken by four different speakers of the same gender and then dishabituated to a new word, differing only in its vowel, spoken by a new speaker. This demonstrates that dogs detected the change in vowel sound, which can only be achieved by categorizing the vowels in the habituation sequence as the same, despite speaker differences in formant frequencies (figure 1b).

4. Discussion

Our results provide the first demonstration that spontaneous speaker normalization is not unique to humans, as we show that domestic dogs can spontaneously discriminate the same words across speakers. We also show that dogs are capable of spontaneously discriminating between unfamiliar speakers of the same gender across different words, suggesting that they have the ability to extract identity information from unfamiliar human voices on the basis of very little acoustic exposure. As interindividual differences in pitch were removed from vocal stimuli, dogs could only discriminate the speakers based on filter-related cues common to the different vowels, and/or on subtle idiosyncratic information encoded in the surrounding consonants.

Previous work on speaker normalization in non-human animals has relied on training the animal to give a behavioural cue when they have successfully discriminated (for a review, see [20]). Our work builds on that of Baru [21], who trained dogs to discriminate between the synthesized vowels [a] and [i] by recognizing formant patterns and responding by lifting a corresponding paw. However, as Baru's experiment used only synthesized voices and required the dogs to complete up to 400 conditioning/reinforcement trials, with electric shocks as aversive stimuli, to achieve accuracy, this level of discrimination is unlikely to represent a spontaneous ability in dogs [21]. Other experiments using natural voices have demonstrated that species as diverse as zebra finches (Taeniopygia guttata) [22] and chinchillas (Chinchilla lanigera) [23] can be trained to normalize speaker differences to discriminate vowels. However, these studies likewise relied on trained behaviours to indicate discrimination and so do not capture spontaneous responses. Here, we measured spontaneous responses to natural voice stimuli in a habituation–dishabituation experiment and found that dogs required no special or extensive training to normalize speakers and vowels.

Speech perception depends on the ability to parse relatively small differences in sounds and recognize these as meaningful [24]. Originally, it was believed that speech production and speech perception were inextricably linked abilities, and that perception required the brain to create a mental model of the articulatory gestures that produced the speech in order to recognize and categorize the sounds [24]. This ‘motor theory’ posited that speech perception was unique to modern humans, as earlier hominins and other animals could not articulate their vocal apparatus to produce speech sounds and therefore could not make the mental connection between articulatory motions and the perceived sounds [25,26]. However, Kuhl & Miller [23] hypothesized that the mechanisms of production and perception are in fact separate and, furthermore, suggested that speech perception may be at least partly independent of speech production. This was based on evidence that the ability to perceive speech sound differences is present both in very young human infants (less than 1 month old) and in non-human animals including chinchillas, neither of which can produce normal speech sounds [23,27]. Thus, their ‘general auditory ability’ hypothesis decoupled perception from production and suggested that human speech evolved to exploit existing perceptual categories rather than to originate new abilities [23,27]. Because dogs are not capable of speech production, our result that dogs can normalize speaker differences to categorize vowels from formants lends some support to this theory, suggesting that the ability to perform speaker normalization may be a latent ancestral trait. However, as dogs have undergone a long period of domestication of at least 13 000 years [28], it is possible that these normalization abilities result from artificial selection by humans for dogs that were more responsive to human vocal cues.
Testing speaker normalization abilities in captive grey wolves (Canis lupus) that do not share the same domestication history may help to clarify this point.

We also show that dogs can spontaneously discriminate between unfamiliar human voices, even when the words spoken are not meaningful to the dogs, on the basis of very limited exposure to just four words. This builds on previous results for familiar voice recognition by both dogs [12] and cats [29,30]. Further investigations could establish which aspects of the human voice are most important for the dogs' perception of speaker identity, and the effects that changing language, pitch or other forms of speech modulation have on dogs’ perceptions of speaker identity. It is known that wolves can recognize familiar conspecifics from their howls [16] and that dogs can recognize familiar humans by their speech [12], but it has not yet been established if this cross-species ability was present in wolves or was specifically selected for during the domestication process.

In conclusion, dogs were found to spontaneously discriminate between both phonemic and identity cues in human speech. Dogs normalized differences in vocal production between same-gender speakers to recognize vowels and they could also use these differences to help to discriminate between unfamiliar speakers within genders. Thus, spontaneous speaker normalization to recognize vowels from formant patterns is not a uniquely human trait.

Supplementary Material

ESM Extended Methods
rsbl20190555supp1.docx (41.5KB, docx)

ESM Figure 1: Habituation-dishabituation paradigm
rsbl20190555supp2.docx (15.2MB, docx)

ESM Table 3: LME results for speaker discrimination
rsbl20190555supp3.docx (13.9KB, docx)

ESM Table 4: LME results for speaker normalisation
rsbl20190555supp4.docx (13.9KB, docx)

VTL Calculator
rsbl20190555supp5.xlsx (14KB, xlsx)

Acknowledgements

We thank Raystede Centre for Animal Welfare, RSPCA Mount Noddy Animal Centre, and all the dog owners for their assistance during testing. We also thank Harriet Grace, Imogen Fallon, Josephine McCartney, Sandra Bendoriute, Alice Keable, Jemma Forman and Louise Brown for their assistance in data collection. We are grateful to Katarzyna Pisanski, Livio Favaro and Matilde Massenet for commenting on earlier versions of this manuscript.

Ethics

The study complied with the internal University of Sussex regulations on the use of animals and was approved by the University of Sussex Ethical Review Committee (Approval no. ARG/04/04). Approval to record human voices to be used as stimuli was also obtained from the University of Sussex Life Sciences & Psychology Cluster based Research Ethics Committee (Approval nos. DRVR0312 and ER/HR236/1).

Data accessibility

The data are available from Dryad: Root-Gutteridge, Holly; Ratcliffe, Victoria; Korzeniowska, Anna T.; Reby, David (2019), Data from: Dogs perceive and spontaneously normalize formant-related speaker and vowel differences in human speech sounds, v3, Dryad, Dataset, https://datadryad.org/stash/share/YzNBMfeEVUXBC0mGK5VcoeNZH44NgJeFbLmtfJqZ-7M [31].

Authors' contributions

H.R.-G. participated in study design, carried out the stimuli preparation, data collection, analysis and statistical analysis, and drafted the manuscript; A.T.K. participated in data collection and analysis, and edited the manuscript; V.F.R. and D.R. conceived, designed and coordinated the study, and edited the manuscript. All authors gave final approval of the manuscript for publication and agreed to be held accountable for its content.

Competing interests

We declare we have no competing interests.

Funding

This study was funded by Biotechnology and Biological Sciences Research Council (BBSRC grant no. BB/P00170X/1 ‘How Dogs Hear Us’). Professor Reby was also supported by the University of Lyon IDEXLYON project as part of the ‘Programme Investissements d'avenir’ (ANR-16-IDEX-0005).

References

  • 1.Titze IR. 1989. Physiologic and acoustic differences between male and female voices. J. Acoust. Soc. Am. 85, 1699–1707. ( 10.1121/1.397959) [DOI] [PubMed] [Google Scholar]
  • 2.Fitch WT, Hauser MD. 2003. Unpacking ‘honesty’: vertebrate vocal production and evolution of acoustic signals. Acoust. Commun. 16, 65–137. ( 10.1007/0-387-22762-8_3) [DOI] [Google Scholar]
  • 3.Kuhl PK. 1983. Perception of auditory equivalence classes for speech in early infancy. Infant Behav. Dev. 6, 263–285. ( 10.1016/S0163-6383(83)80036-8) [DOI] [Google Scholar]
  • 4.Fitch WT, Giedd J. 1999. Morphology and development of the human vocal tract: a study using magnetic resonance imaging. J. Acoust. Soc. Am. 106, 1511–1522. ( 10.1121/1.427148) [DOI] [PubMed] [Google Scholar]
  • 5.Childers DG, Wu K. 1991. Gender recognition from speech. Part II: fine analysis. J. Acoust. Soc. Am. 90, 1841–1856. ( 10.1121/1.401664) [DOI] [PubMed] [Google Scholar]
  • 6.Kreiman J, Sidtis D.. 2011. Recognizing speaker identity from voice: theoretical and ethological perspectives and a psychological model. In Foundations of voice studies: an interdisciplinary approach to voice production and perception, pp. 156–188. Malden, MA: Wiley-Blackwell. [Google Scholar]
  • 7.Owren MJ, Cardillo GC. 2006. The relative roles of vowels and consonants in discriminating talker identity versus word meaning. J. Acoust. Soc. Am. 119, 1727–1739. ( 10.1121/1.2161431) [DOI] [PubMed] [Google Scholar]
  • 8.Liberman AM. 1982. On finding that speech is special. Am. Psychol. 37, 107–144. ( 10.1037/0003-066X.37.2.148) [DOI] [Google Scholar]
  • 9.Fukuzawa M, Mills DS, Cooper JJ. 2005. More than just a word: non-semantic command variables affect obedience in the domestic dog (Canis familiaris). Appl. Anim. Behav. Sci. 91, 129–141. ( 10.1016/j.applanim.2004.08.025) [DOI] [Google Scholar]
  • 10.Fukuzawa M, Mills DS, Cooper JJ. 2005. The effect of human command phonetic characteristics on auditory cognition in dogs (Canis familiaris). J. Comp. Psychol. 119, 117–120. ( 10.1037/0735-7036.119.1.117) [DOI] [PubMed] [Google Scholar]
  • 11.Coutellier L. 2006. Are dogs able to recognize their handler's voice? A preliminary study. Anthrozoos 19, 278–284. ( 10.2752/089279306785415529) [DOI] [Google Scholar]
  • 12.Adachi I, Kuwahata H, Fujita K. 2007. Dogs recall their owner's face upon hearing the owner's voice. Anim. Cogn. 10, 17–21. ( 10.1007/s10071-006-0025-8) [DOI] [PubMed] [Google Scholar]
  • 13.Bachorowski J-A, Owren MJ. 1999. Acoustic correlates of talker sex and individual talker identity are present in a short vowel segment produced in running speech. J. Acoust. Soc. Am. 106, 1054–1063. ( 10.1121/1.427115) [DOI] [PubMed] [Google Scholar]
  • 14.Reby D, Hewison M, Izquierdo M, Pépin D. 2001. Red deer (Cervus elaphus) hinds discriminate between the roars of their current harem holder stag and those of neighbouring stags. Ethology 107, 951–960. ( 10.1046/j.1439-0310.2001.00732.x) [DOI] [Google Scholar]
  • 15.Charlton BD, Ellis WAH, Larkin R, Tecumseh Fitch W. 2012. Perception of size-related formant information in male koalas (Phascolarctos cinereus). Anim. Cogn. 15, 999–1006. ( 10.1007/s10071-012-0527-5) [DOI] [PubMed] [Google Scholar]
  • 16.Font E, Carazo P, Márquez R, Palacios V. 2015. Recognition of familiarity on the basis of howls: a playback experiment in a captive group of wolves. Behaviour 152, 593–614. ( 10.1163/1568539x-00003244) [DOI] [Google Scholar]
  • 17.Molnár C, Pongrácz P, Faragó T, Dóka A, Miklósi Á. 2009. Dogs discriminate between barks: the effect of context and identity of the caller. Behav. Processes 82, 198–201. ( 10.1016/j.beproc.2009.06.011) [DOI] [PubMed] [Google Scholar]
  • 18.Charlton BD, Reby D, McComb K. 2007. Female red deer prefer the roars of larger males. Biol. Lett. 3, 382–385. ( 10.1098/rsbl.2007.0244) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Peterson GEG, Barney HLH. 1952. Control methods used in a study of the vowels. J. Acoust. Soc. Am. 24, 175–184. ( 10.1121/1.1906875) [DOI] [Google Scholar]
  • 20.Kriengwatana B, Escudero P, ten Cate C. 2015. Revisiting vocal perception in non-human animals: a review of vowel discrimination, speaker voice recognition, and speaker normalization. Front. Psychol. 5, 1543–1556. ( 10.3389/fpsyg.2014.01543) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Baru AV. 1975. Discrimination of synthesized vowels [a] and [i] with varying parameters (fundamental frequency, intensity, duration and number of formants) in dog. In Auditory analysis and perception of speech (eds Fant G, Tatham MAA), pp. 91–101. London, UK: Academic Press Ltd. [Google Scholar]
  • 22.Ohms VR, Gill A, van Heijningen CAA, Beckers GJL, ten Cate C. 2010. Zebra finches exhibit speaker-independent phonetic perception of human speech. Proc. R. Soc. B 277, 1003–1009. ( 10.1098/rspb.2009.1788) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Kuhl PK, Miller JD. 1975. Speech perception by the chinchilla: voiced-voiceless distinction in alveolar plosive consonants. Science 190, 69–72. ( 10.1126/science.1166301) [DOI] [PubMed] [Google Scholar]
  • 24.Liberman AM, Cooper FS, Shankweiler DP, Studdert-Kennedy M. 1967. Perception of the speech code. Psychol. Rev. 74, 431–461. ( 10.1037/h0020279) [DOI] [PubMed] [Google Scholar]
  • 25.Mattingly IG. 1972. Speech cues and sign stimuli. Am. Sci. 60, 327–337. [PubMed] [Google Scholar]
  • 26.Liberman AM, Mattingly IG. 1985. The motor theory of speech perception revised. Cognition 21, 1–36. ( 10.1016/0010-0277(85)90021-6) [DOI] [PubMed] [Google Scholar]
  • 27.Kuhl PK. 1988. Auditory perception and the evolution of speech. Hum. Evol. 3, 19–43. ( 10.1007/BF02436589) [DOI] [Google Scholar]
  • 28.Thalmann O, et al. 2013. Complete mitochondrial genomes of ancient canids suggest a European origin of domestic dogs. Science 342, 871–874. ( 10.1126/science.1243650) [DOI] [PubMed] [Google Scholar]
  • 29.Saito A, Shinozuka K. 2013. Vocal recognition of owners by domestic cats (Felis catus). Anim. Cogn. 16, 685–690. ( 10.1007/s10071-013-0620-4) [DOI] [PubMed] [Google Scholar]
  • 30.Takagi S, Arahori M, Chijiiwa H, Saito A, Kuroshima H, Fujita K. 2019. Cats match voice and face: cross-modal representation of humans in cats (Felis catus). Anim. Cogn. 22, 901–906. ( 10.1007/s10071-019-01265-2) [DOI] [PubMed] [Google Scholar]
  • 31.Root-Gutteridge H, Ratcliffe VF, Korzeniowska AT, Reby D. 2019. Data from: Dogs perceive and spontaneously normalize formant-related speaker and vowel differences in human speech sounds. Dryad Digital Repository. (https://datadryad.org/stash/share/YzNBMfeEVUXBC0mGK5VcoeNZH44NgJeFbLmtfJqZ-7M)


