Biology Letters. 2022 Jun 29; 18(6): 20220098. doi: 10.1098/rsbl.2022.0098

Cross-modal facilitation of auditory discrimination in a frog

Logan S. James1,2, A. Leonie Baier1,2, Rachel A. Page2, Paul Clements3, Kimberly L. Hunter4, Ryan C. Taylor2,4, Michael J. Ryan1,2
PMCID: PMC9240679  PMID: 35765810

Abstract

Stimulation in one sensory modality can affect perception in a separate modality, resulting in diverse effects including illusions in humans. This can also result in cross-modal facilitation, a process in which sensory performance in one modality is improved by stimulation in another modality. For instance, a simple sound can improve performance in a visual task in both humans and cats. However, the range of contexts and underlying mechanisms that evoke such facilitation effects remain poorly understood. Here, we investigated cross-modal facilitation in wild-caught túngara frogs, a species with well-studied acoustic preferences in females. We first identified that a combined visual and seismic cue (vocal sac movement and water ripple) was behaviourally relevant for females choosing between two courtship calls in a phonotaxis assay. We then found that this combined cross-modal stimulus rescued a species-typical acoustic preference in the presence of background noise that otherwise abolished the preference. These results highlight how cross-modal stimulation can prime attention in receivers to improve performance during decision-making. With this, we provide the foundation for future work uncovering the processes and conditions that promote cross-modal facilitation effects.

Keywords: comparative psychology, multi-modal integration, multi-modal communication, túngara frogs, multi-sensory processing

1. Introduction

Sensory perceptions in one modality are routinely impacted by stimulation in other modalities [1,2]. For instance, interactions between vision and hearing create many illusions in humans [3–9]. The sources of cross-modal interactions are myriad, including interactions in primary sensory processes as well as higher level cognitive processes [3]. While most research has focused on visual/auditory interactions, these effects likely occur across all sensory modalities. For instance, tactile stimulation can affect visual perception, and odours can affect tactile perception [10,11].

Stimulation in one modality can improve performance in a separate modality. For instance, visual input improves comprehension of noisy speech [12,13]. In addition, hearing the word for an object can allow participants to visually detect an otherwise unseen object [14]. However, the cross-modal stimulus need not always be so relevant. Simple auditory stimulation can reveal otherwise unseen images when temporally and spatially aligned to the image presentation, and vice versa [15,16]. Even a non-spatial auditory ‘pip’ can improve performance on a visual search task [17,18], and a light flash can improve detection of low-intensity sounds [19]. Here, we refer to the process in which stimulation in one modality improves performance in a separate modality as ‘cross-modal facilitation’, a term that has been used to describe a variety of related processes [2,20–22]. Many aspects of when and how cross-modal facilitation occurs remain unexplored.

Investigations in non-human animals have revealed similar cross-modal impacts on perception and identified some of the neuronal processes responsible. Pioneering studies in owls and cats revealed extensive brain areas that respond to and integrate stimuli from multiple modalities [23–26]. Cats demonstrate a behavioural result similar to that in humans, in which a spatially and temporally aligned auditory stimulus enhances performance on a visual detection task [27,28]. However, little is known about the range of conditions and species where cross-modal facilitation occurs, particularly using naturalistic stimuli and non-domestic animals.

The túngara frog offers an excellent system to study cross-modal facilitation. In this species, groups of males call from shallow pools at night to attract female frogs, which also attracts predators and parasites [29–31]. Males can produce simple calls consisting of a downward-sweeping whine, or complex calls consisting of a whine followed by one or more short chucks. Across decades of two-choice phonotaxis experiments, wild-caught females have shown consistent preferences for a speaker playing a complex call over a simple call [32]. To produce these calls, males inflate and deflate a large vocal sac, creating a temporally aligned visual cue as well as a water-surface ripple (‘seismic’) cue [33]. These additional cues can be used by female frogs [34–37], generally enhancing preference. Integration of the acoustic and visual components can also occur nonlinearly and create emergent percepts [36,38,39]. However, how these stimuli may promote cross-modal facilitation remains largely unknown.

Here, we demonstrated that cross-modal facilitation could improve performance of a biologically relevant auditory choice task in the túngara frog. We first investigated what type of cross-modal stimulation was behaviourally relevant for female frogs. We then found that, when we used acoustic noise to abolish the preference for a complex call, cross-modal facilitation restored the acoustic preference.

2. Methods

(a) Animals

We collected pairs of túngara frogs (Engystomops (=Physalaemus) pustulosus) from ephemeral pools in and around Gamboa, Panamá shortly after sunset between September and December 2021. Phonotaxis experiments were conducted with females in a laboratory at the Smithsonian Tropical Research Institute. Frogs were acclimated to darkness in a cooler for at least 30 min prior to testing, and toe clipped following testing to ensure that frogs were not recaptured and tested again. All procedures were approved by the University of Texas at Austin (IACUC: AUP-2019-00067), STRI (IACUC: 2018-0411-2021) and the Ministry of the Environment of Panamá (MiAmbiente: SE/A-40-19). We used 38 frogs in experiment 1, and 50 frogs in experiment 2. Following testing, all frogs were returned to the site of capture within 24 h.

(b) Apparatus

Experiments were conducted in a wading pool inside a dimly lit acoustic chamber (figure 1a). For acoustic stimulation, we placed speakers in holes cut in the side of the pool, directly above the water line. For visual stimulation, we attached three-dimensional-printed model frogs (RoboFrogs [34]) in front of each speaker (figure 1b,c). The RoboFrogs housed a silicone vocal sac replica that we dynamically inflated simultaneously with the acoustic call for trials with visual stimulation. For seismic stimulation, we placed custom-built ripple generators on elevated platforms in front of each speaker. The generator rested just below the water surface and created a ripple simultaneously with the acoustic call for trials with seismic stimulation. Finally, a pair of speakers on the wall above and behind the call speakers were used for continuous playback of green noise stimuli on trials with noise. See electronic supplementary material for additional details on the materials and methods used.

Figure 1. (a) Experimental set-up. (b) RoboFrog with an inflated silicone vocal sac. (c) The RoboFrog controller. See Methods and electronic supplementary material for additional details.

(c) Experimental stimuli

For experiment 1, the speakers alternated broadcasting the same complex call stimulus (whine chuck) on a 1.2 s loop. Each female was tested in three conditions: visual, seismic and visual + seismic. In each condition, the call from one speaker (randomly assigned each trial) was paired with the dynamically inflating vocal sac of a RoboFrog (visual), the generation of a ripple (seismic) or both (visual + seismic).

For experiment 2, speakers again alternated broadcasting call stimuli on a 1.2 s loop. One speaker (randomly assigned each trial) played a simple call (whine) and the other played a complex call (whine chuck). The whine portion of both stimuli was identical. Each female was tested in four conditions: acoustic, trimodal, acoustic + noise and trimodal + noise. In the acoustic condition, no other stimuli were presented besides the calls. In the trimodal condition, the calls from both speakers were paired with a simultaneous visual and seismic cue. These two conditions were repeated in the presence of continuous playback of green noise for the acoustic + noise and trimodal + noise conditions. The duration of both the visual and seismic stimuli was based on a simple call (whine) and was identical regardless of the acoustic stimulus with which it was paired (i.e. the visual and seismic stimuli gave no indication of what acoustic stimulus played from the speaker).

In both experiments, the order of conditions was randomly assigned to each female.
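As a small illustration of this randomization step (a sketch only: the condition labels below follow experiment 2 and the frog IDs are invented for the example, not taken from the study), the per-female assignment can be done in R with sample():

```r
# Minimal sketch of per-female randomization of condition order (hypothetical
# frog IDs; condition labels follow experiment 2). Not the authors' code.
set.seed(1)  # seed only to make this example reproducible
conditions <- c("acoustic", "trimodal", "acoustic_noise", "trimodal_noise")
frog_ids   <- paste0("frog_", 1:50)

# One random permutation of the conditions for each female
condition_order <- setNames(lapply(frog_ids, function(id) sample(conditions)),
                            frog_ids)
condition_order[["frog_1"]]  # e.g. the test order for the first female
```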

(d) Phonotaxis testing

We conducted phonotaxis experiments using standard protocols in this system [32,35]. In brief, each female was exposed to that trial's stimuli from the starting platform for 2 min before the restraining cage was lifted. Experiments were monitored live and terminated when the female made a choice by remaining in the ‘choice zone’ near one speaker for at least 4 s or failed to choose after 10 min elapsed (foul out). Foul out trials were removed from analysis. She was then placed back on the starting platform and the next trial was started. The latency to respond was recorded (we did not detect any significant differences in latency across conditions; see electronic supplementary material).

(e) Statistics

All analyses were conducted in R. For each condition, comparisons to chance were conducted using binomial tests. Because the same females were used across conditions, we compared conditions using generalized linear mixed-effects models (GLMMs) with frog ID as a random effect and a binomial error distribution (logit link), using the ‘lme4’ package [40].
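A minimal sketch of this pipeline in R, using simulated placeholder data and hypothetical column names (chose_complex, frog_id); the actual analysis script is in the electronic supplementary material:

```r
library(lme4)

# Simulate a long-format data set with one row per completed trial.
# chose_complex = 1 if the female chose the focal stimulus, 0 otherwise.
# These data and column names are placeholders for illustration only.
set.seed(42)
d <- expand.grid(frog_id   = factor(1:50),
                 condition = c("acoustic", "trimodal",
                               "acoustic_noise", "trimodal_noise"))
d$chose_complex <- rbinom(nrow(d), size = 1, prob = 0.7)

# Comparison to chance (50%) within each condition
by_cond <- split(d$chose_complex, d$condition)
lapply(by_cond, function(x) binom.test(sum(x), length(x), p = 0.5))

# Comparison across conditions: binomial GLMM with frog ID as a random effect
m <- glmer(chose_complex ~ condition + (1 | frog_id),
           data = d, family = binomial)
summary(m)
```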

3. Results

(a) Experiment 1

We used a two-choice phonotaxis protocol in a large pool of water where both speakers broadcast the same complex call, and we randomly paired one speaker with a dynamic visual, seismic or combined stimulus on each trial (see Methods). By presenting each cue in isolation and combined, we discovered that only the combined stimulus with both the visual and seismic cues evoked a preference reliably different from chance (binomial test: 25 out of 35; p = 0.0167; figure 2a).
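This comparison to chance can be reproduced directly from the counts reported above with base R:

```r
# 25 of 35 females approached the speaker paired with the combined
# visual + seismic stimulus; exact binomial test against chance (50%)
binom.test(25, 35, p = 0.5)
# two-sided p ≈ 0.017, matching the value reported above
```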

Figure 2. (a) A bar graph from experiment 1 demonstrating that the combined visual and seismic stimulus causes the largest effect. The y-axis indicates the percentage of females that chose the multi-sensory stimulus over the unisensory stimulus. For all conditions, an identical whine chuck (complex call) was played from both speakers. (b) A bar graph from experiment 2 depicting significant cross-modal facilitation. The y-axis indicates the percentage of females that chose a whine chuck (complex call) over a whine (simple call). For all conditions, the only difference between the choices was in the acoustic modality. For (a,b), error bars indicate 95% confidence intervals from binomial tests, with darker bars and asterisks above bars indicating conditions significantly different from chance (50% dashed line). Asterisks with horizontal lines indicate significant differences between conditions from GLMMs. Numbers at the bottom of each bar indicate the sample size.

(b) Experiment 2

We next asked whether cross-modal stimulation could enhance auditory discrimination. We capitalized on the reliable and natural acoustic preference in túngara frogs for a complex call over a simple call [32], which we reproduced in an aquatic arena for the first time (binomial test: 38 out of 48; p < 0.0001; figure 2b). Next, we paired both speakers (one broadcasting a simple call, the other broadcasting a complex call) with the same combined dynamic visual and seismic cues, presented simultaneously with the acoustic stimuli, and found that preferences remained stable and high (binomial test: 40 out of 50; p < 0.0001; figure 2b). Note that the visual and seismic cues were identical at both speakers, providing no information to the females about which acoustic stimulus was playing from each speaker. Given the results of experiment 1, we used only the combined stimulus to ensure that the cross-modal stimulation was perceptible and behaviourally relevant for the female frogs. Then, we added green background noise (see electronic supplementary material) at a volume equal to that of the call stimuli at the starting platform, which was sufficient to abolish the acoustic preference for a complex call (binomial test: 26 out of 50; p = 0.8877; figure 2b). Finally, we found that adding the cross-modal stimuli to both speakers in the presence of noise was sufficient to rescue the preference (binomial test: 36 out of 48; p = 0.0007; GLMM: p < 0.02 for all pairwise comparisons with the acoustic + noise condition; figure 2b).
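The per-condition tests above can likewise be checked from the reported counts, and one plausible way to obtain pairwise GLMM comparisons against the acoustic + noise condition is to set that condition as the reference level (a sketch using the hypothetical data frame from the Statistics section; not necessarily the authors' exact procedure):

```r
# Per-condition comparisons to chance, using the counts reported above
binom.test(38, 48)  # acoustic:          p < 0.0001
binom.test(40, 50)  # trimodal:          p < 0.0001
binom.test(26, 50)  # acoustic + noise:  p ≈ 0.89 (preference abolished)
binom.test(36, 48)  # trimodal + noise:  p ≈ 0.0007 (preference rescued)

# Pairwise GLMM comparisons against the acoustic + noise condition: with that
# condition as the reference level, each fixed-effect contrast tests another
# condition against it. 'd', 'chose_complex' and 'frog_id' are the hypothetical
# objects from the Statistics sketch; rebuild them or load real data first.
# d$condition <- relevel(factor(d$condition), ref = "acoustic_noise")
# m2 <- lme4::glmer(chose_complex ~ condition + (1 | frog_id),
#                   data = d, family = binomial)
# summary(m2)$coefficients
```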

4. Discussion

Here, we demonstrated cross-modal facilitation in a frog, where the presence of visual and seismic stimuli rescued an auditory preference in the presence of noise. This finding is reminiscent of cross-modal facilitation in other domains, particularly the finding that visual cues improve noisy speech comprehension [12,13]. However, it is important to note that in the current study, the identical visual and seismic stimuli were present at both speakers, and thus could not bias decision-making on their own. Rather, the mere presence of these additional stimuli caused female frogs to act on the acoustic differences between two stimuli, despite the presence of noise that previously abolished the acoustic preference.

The processes governing cross-modal facilitation remain poorly understood and are likely complex. The related concept of multi-sensory integration, which can enhance overall performance, has been the subject of extensive theory [1,41,42], but how stimuli can improve discrimination or detection in a separate modality has received considerably less attention. From human studies, it has been hypothesized that hearing a word primes subjects to expect particular shapes, so that participants can more easily identify those objects in a visual detection task [14]. Cross-modal facilitation could also occur through one stimulus improving attention to another modality. For instance, an acoustic cue can improve visual detection in humans and cats, but only when spatially and temporally aligned to the visual stimulus [15,20,27,28], and an acoustic ‘pip’ can cause a temporally aligned visual stimulus to ‘pop’ out [17,18,21]. In particular, cross-modal input may improve selective attention to the especially relevant aspects of stimuli in another modality [43]. More generally, any cue that provides temporal or spatial information to a receiver can help unmask stimuli from noise. We hypothesize that such processes occurred in our experiments with túngara frogs, with the cross-modal stimuli priming females to attend to temporal and spatial aspects of the acoustic stimuli. Indeed, previous research on multi-sensory preferences in female túngara frogs has found that the temporal and spatial alignment of the visual and acoustic stimuli is important for whether females prefer or even recognize the visual stimulus [38,44,45].

Within the vertebrate brain, the optic tectum (OT; superior colliculus in mammals) has been identified as a key area for multi-sensory integration in multiple taxa, as well as for goal-oriented movement [23,25,46–51]. Electrophysiological results in the superior colliculus of cats closely match behavioural responses during cross-modal facilitation, when auditory cues improve visual detection [27,47,52], suggesting that the OT could also be important for the cross-modal facilitation we observed in a frog. Indeed, frogs of other species fail to respond to relevant visual motion after ablation of the OT [53,54]. These results highlight that cross-modal effects can arise without a mammalian cortex. Given that multi-sensory integration also occurs in invertebrates, cross-modal facilitation effects are likely there as well, indicating that radically different neural architectures can produce such effects [55–57].

Our results have important implications for sexual signalling, mate choice and multi-sensory processing in frogs. Multi-sensory integration of acoustic and visual components has been shown across numerous frog species [58–64]. The current study indicates a novel importance for water surface ripples in mate choice. In addition, our data suggest that cross-modal facilitation can serve to maintain species-typical preferences for complex calls in noisy conditions, an important task for many species to solve [65]. Frog-eating bats also attend to all three components tested here [66–68], and future work will be essential to understanding how cross-modal effects might impact predation risk and calling behaviour.

Overall, we demonstrate that visual and seismic stimuli can cause cross-modal facilitation in a naturalistic auditory choice task. This is a special case of the general theory that multi-sensory signalling leads to enhanced performance in animal communication [69]. However, overstimulation in one modality can also reduce performance in another modality, leading to cognitive overload [70], a process well understood by drivers who turn the radio down when parking the car. Understanding where the line lies between enhanced performance and cognitive overload, and how it varies across different receivers, provides an intriguing avenue for future inquiry.

Acknowledgements

We thank Gregg Cohen for invaluable logistical assistance as well as Jorge López and Luke Larter for help collecting frogs and conducting experiments. Olivia Rose Hamilton and Savi Made helped with equipment. We thank the Smithsonian Tropical Research Institute for equipment and support.

Ethics

All procedures were approved by the University of Texas at Austin (IACUC: AUP-2019-00067), STRI (IACUC: 2018-0411-2021) and the Ministry of the Environment of Panamá (MiAmbiente: SE/A-40-19).

Data accessibility

Data used for analysis and example code are available in the electronic supplementary material [71].

Authors' contributions

L.S.J.: conceptualization, formal analysis, investigation, methodology, visualization, writing—original draft and writing—review and editing; A.L.B.: conceptualization, methodology and writing—review and editing; R.A.P.: conceptualization, funding acquisition, methodology, resources, supervision and writing—review and editing; P.C.: methodology, resources and writing—review and editing; K.L.H.: conceptualization, funding acquisition, methodology, resources, supervision and writing—review and editing; R.C.T.: conceptualization, funding acquisition, methodology, resources, supervision and writing—review and editing; M.J.R.: conceptualization, funding acquisition, methodology, supervision and writing—review and editing.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

We declare we have no competing interest.

Funding

The research was funded through a grant from the National Science Foundation (IOS-1914646). A.L.B. was supported by the Alexander von Humboldt Foundation (Feodor Lynen Research Fellowship).

References

1. Stein B. 2012. The new handbook of multisensory processing. Cambridge, UK: MIT Press.
2. Todd JW. 1912. Reaction to multiple stimuli. New York, NY: The Science Press.
3. Shams L, Kim R. 2010. Crossmodal influences on visual perception. Phys. Life Rev. 7, 269-284. (doi:10.1016/j.plrev.2010.04.006)
4. Grove PM, Robertson C, Harris LR. 2016. Disambiguating the stream/bounce illusion with inference. Multisens. Res. 29, 453-464. (doi:10.1163/22134808-00002524)
5. Driver J. 1996. Enhancement of selective listening by illusory mislocation of speech sounds due to lip-reading. Nature 381, 66-68. (doi:10.1038/381066a0)
6. Kobayashi M, Osada Y, Kashino M. 2007. The effect of a flashing visual stimulus on the auditory continuity illusion. Percept. Psychophys. 69, 393-399. (doi:10.3758/BF03193760)
7. Shams L, Kamitani Y, Shimojo S. 2002. Visual illusion induced by sound. Cogn. Brain Res. 14, 147-152. (doi:10.1016/S0926-6410(02)00069-1)
8. Alais D, Burr D. 2004. The ventriloquist effect results from near-optimal bimodal integration. Curr. Biol. 14, 257-262. (doi:10.1016/j.cub.2004.01.029)
9. McGurk H, MacDonald J. 1976. Hearing lips and seeing voices. Nature 264, 746-748. (doi:10.1038/264746a0)
10. Demattè ML, Sanabria D, Sugarman R, Spence C. 2006. Cross-modal interactions between olfaction and touch. Chem. Senses 31, 291-300. (doi:10.1093/chemse/bjj031)
11. Shimojo S, Miyauchi S, Hikosaka O. 1997. Visual motion sensation yielded by non-visually driven attention. Vision Res. 37, 1575-1580. (doi:10.1016/S0042-6989(96)00313-6)
12. McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. 2012. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia 50, 762-776. (doi:10.1016/j.neuropsychologia.2012.01.010)
13. Drijvers L, Özyürek A. 2017. Visual context enhanced: the joint contribution of iconic gestures and visible speech to degraded speech comprehension. J. Speech Lang. Hear. Res. 60, 212-222. (doi:10.1044/2016_JSLHR-H-16-0101)
14. Lupyan G, Ward EJ. 2013. Language can boost otherwise unseen objects into visual awareness. Proc. Natl Acad. Sci. USA 110, 14 196-14 201. (doi:10.1073/pnas.1303312110)
15. Bolognini N, Frassinetti F, Serino A, Làdavas E. 2005. ‘Acoustical vision’ of below threshold stimuli: interaction among spatially converging audiovisual inputs. Exp. Brain Res. 160, 273-282. (doi:10.1007/s00221-004-2005-z)
16. Bolognini N, Leo F, Passamonti C, Stein BE, Làdavas E. 2007. Multisensory-mediated auditory localization. Perception 36, 1477-1485. (doi:10.1068/p5846)
17. Van der Burg E, Olivers CNL, Bronkhorst AW, Theeuwes J. 2008. Pip and pop: nonspatial auditory signals improve spatial visual search. J. Exp. Psychol. Hum. Percept. Perform. 34, 1053-1065. (doi:10.1037/0096-1523.34.5.1053)
18. Gao M, Chang R, Wang A, Zhang M, Cheng Z, Li Q, Tang X. 2021. Which can explain the pip-and-pop effect during a visual search: multisensory integration or the oddball effect? J. Exp. Psychol. Hum. Percept. Perform. 47, 689-703. (doi:10.1037/xhp0000905)
19. Lovelace CT, Stein BE, Wallace MT. 2003. An irrelevant light enhances auditory detection in humans: a psychophysical analysis of multisensory integration in stimulus detection. Cogn. Brain Res. 17, 447-453. (doi:10.1016/S0926-6410(03)00160-5)
20. Keefe JM, Pokta E, Störmer VS. 2021. Cross-modal orienting of exogenous attention results in visual–cortical facilitation, not suppression. Sci. Rep. 11, 1-11. (doi:10.1038/s41598-021-89654-x)
21. Arieh Y, Marks LE. 2008. Cross-modal interaction between vision and hearing: a speed–accuracy analysis. Percept. Psychophys. 70, 412-421. (doi:10.3758/PP.70.3.412)
22. Foxton JM, Riviere LD, Barone P. 2010. Cross-modal facilitation in speech prosody. Cognition 115, 71-78. (doi:10.1016/j.cognition.2009.11.009)
23. Knudsen EI. 1982. Auditory and visual maps of space in the optic tectum of the owl. J. Neurosci. 2, 1177-1194. (doi:10.1523/jneurosci.02-09-01177.1982)
24. Stein BE, Meredith MA. 1993. The merging of the senses. Cambridge, UK: The MIT Press.
25. Knudsen EI, Brainard MS. 1995. Creating a unified representation of visual and auditory space in the brain. Annu. Rev. Neurosci. 18, 19-43. (doi:10.1146/annurev.ne.18.030195.000315)
26. Stein BE, Stanford TR, Rowland BA. 2014. Development of multisensory integration from the perspective of the individual neuron. Nat. Rev. Neurosci. 15, 520-535. (doi:10.1038/nrn3742)
27. Stein BE, Meredith MA, Huneycutt WS, McDade L. 1989. Behavioral indices of multisensory integration: orientation to visual cues is affected by auditory stimuli. J. Cogn. Neurosci. 1, 12-24. (doi:10.1162/jocn.1989.1.1.12)
28. Bean NL, Stein BE, Rowland BA. 2021. Stimulus value gates multisensory integration. Eur. J. Neurosci. 53, 3142-3159. (doi:10.1111/ejn.15167)
29. Ryan MJ. 1985. The túngara frog: a study in sexual selection and communication. Chicago, IL: University of Chicago Press.
30. Tuttle MD, Ryan MJ. 1981. Bat predation and the evolution of frog vocalizations in the neotropics. Science 214, 677-678. (doi:10.1126/science.214.4521.677)
31. Bernal XE, de Silva P. 2015. Cues used in host-seeking behavior by frog-biting midges (Corethrella spp. Coquillet). J. Vector Ecol. 40, 122-128. (doi:10.1111/jvec.12140)
32. Ryan MJ, Akre KL, Baugh AT, Bernal XE, Lea AM, Leslie C, Still MB, Wylie DC, Rand AS. 2019. Nineteen years of consistently positive and strong female mate preferences despite individual variation. Am. Nat. 194, 125-134. (doi:10.1086/704103)
33. James LS, Halfwerk W, Hunter KL, Page RA, Taylor RC, Wilson PS, Ryan MJ. 2021. Covariation among multimodal components in the courtship display of the túngara frog. J. Exp. Biol. 224, 1-10. (doi:10.1242/jeb.241661)
34. Taylor RC, Klein BA, Stein J, Ryan MJ. 2008. Faux frogs: multimodal signalling and the value of robotics in animal behaviour. Anim. Behav. 76, 1089-1097. (doi:10.1016/j.anbehav.2008.01.031)
35. Cronin AD, Ryan MJ, Page RA, Hunter KL, Taylor RC. 2019. Environmental heterogeneity alters mate choice behavior for multimodal signals. Behav. Ecol. Sociobiol. 73, 43. (doi:10.1007/s00265-019-2654-3)
36. Stange N, Page RA, Ryan MJ, Taylor RC. 2017. Interactions between complex multisensory signal components result in unexpected mate choice responses. Anim. Behav. 134, 239-247. (doi:10.1016/j.anbehav.2016.07.005)
37. Leslie CE, Rosencrans RF, Walkowski W, Gordon WC, Bazan NG, Ryan MJ, Farris HE. 2020. Reproductive state modulates retinal sensitivity to light in female túngara frogs. Front. Behav. Neurosci. 13, 1-13. (doi:10.3389/fnbeh.2019.00293)
38. Taylor RC, Ryan MJ. 2013. Interactions of multisensory components perceptually rescue túngara frog mating signals. Science 341, 273-274. (doi:10.1126/science.1237113)
39. Taylor RC, Wilhite KO, Ludovici RJ, Mitchell KM, Halfwerk W, Page RA, Ryan MJ, Hunter KL. 2021. Complex sensory environments alter mate choice outcomes. J. Exp. Biol. 224, 1-9. (doi:10.1242/jeb.233288)
40. Bates D, Mächler M, Bolker B, Walker S. 2015. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1-48. (doi:10.18637/jss.v067.i01)
41. Partan S, Marler P. 1999. Communication goes multimodal. Science 283, 1272-1273. (doi:10.1126/science.283.5406.1272)
42. Choi I, Lee JY, Lee SH. 2018. Bottom-up and top-down modulation of multisensory integration. Curr. Opin. Neurobiol. 52, 115-122. (doi:10.1016/j.conb.2018.05.002)
43. Johnston WA, Dark VJ. 1986. Selective attention. Annu. Rev. Psychol. 37, 43-75. (doi:10.1146/annurev.ps.37.020186.000355)
44. Taylor RC, Page RA, Klein BA, Ryan MJ, Hunter KL. 2017. Perceived synchrony of frog multimodal signal components is influenced by content and order. Integr. Comp. Biol. 57, 902-909. (doi:10.1093/icb/icx027)
45. Taylor RC, Klein BA, Stein J, Ryan MJ. 2011. Multimodal signal variation in space and time: how important is matching a signal with its signaler? J. Exp. Biol. 214, 815-820. (doi:10.1242/jeb.043638)
46. Winkowski DE, Knudsen EI. 2007. Top-down control of multimodal sensitivity in the barn owl optic tectum. J. Neurosci. 27, 13 279-13 291. (doi:10.1523/JNEUROSCI.3937-07.2007)
47. Meredith MA, Nemitz JW, Stein BE. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. J. Neurosci. 7, 3215-3229. (doi:10.1523/jneurosci.07-10-03215.1987)
48. Northmore DPM. 2011. The optic tectum. In Encyclopedia of fish physiology: from genome to environment (ed. Farrell AP), pp. 131-142. Amsterdam, The Netherlands: Elsevier.
49. Wilczynski W, Northcutt RG. 1977. Afferents to the optic tectum of the leopard frog: an HRP study. J. Comp. Neurol. 173, 219-229. (doi:10.1002/cne.901730202)
50. Ingle D. 1970. Visuomotor functions of the frog optic tectum. Brain Behav. Evol. 3, 57-71. (doi:10.1159/000125463)
51. Saitoh K, Ménard A, Grillner S. 2007. Tectal control of locomotion, steering, and eye movements in lamprey. J. Neurophysiol. 97, 3093-3108. (doi:10.1152/jn.00639.2006)
52. Stein BE, Stanford TR, Rowland BA. 2009. The neural basis of multisensory integration in the midbrain: its organization and maturation. Hear. Res. 258, 4-15. (doi:10.1016/j.heares.2009.03.012)
53. Ingle D. 1977. Detection of stationary objects by frogs (Rana pipiens) after ablation of optic tectum. J. Comp. Physiol. Psychol. 91, 1359-1364. (doi:10.1037/h0077415)
54. Ingle D. 1973. Two visual systems in the frog. Science 181, 1053-1055. (doi:10.1126/science.181.4104.1053)
55. Uetz GW, Roberts JA. 2002. Multisensory cues and multimodal communication in spiders: insights from video/audio playback studies. Brain Behav. Evol. 59, 222-230. (doi:10.1159/000064909)
56. Ohyama T, et al. 2015. A multilevel multimodal circuit enhances action selection in Drosophila. Nature 520, 633-639. (doi:10.1038/nature14297)
57. Mongeau JM, Schweikert LE, Davis AL, Reichert MS, Kanwal JK. 2021. Multimodal integration across spatiotemporal scales to guide invertebrate locomotion. Integr. Comp. Biol. 61, 842-853. (doi:10.1093/icb/icab041)
58. Preininger D, Boeckle M, Freudmann A, Starnberger I, Sztatecsny M, Hödl W. 2013. Multimodal signaling in the small torrent frog (Micrixalus saxicola) in a complex acoustic environment. Behav. Ecol. Sociobiol. 67, 1449-1456. (doi:10.1007/s00265-013-1489-6)
59. Grafe TU, Preininger D, Sztatecsny M, Kasah R, Dehling JM, Proksch S, Hödl W. 2012. Multimodal communication in a noisy environment: a case study of the Bornean rock frog Staurois parvus. PLoS ONE 7, e37965. (doi:10.1371/journal.pone.0037965)
60. Starnberger I, Preininger D, Hödl W. 2014. The anuran vocal sac: a tool for multimodal signalling. Anim. Behav. 97, 281-288. (doi:10.1016/j.anbehav.2014.07.027)
61. Narins PM, Grabul DS, Soma KK, Gaucher P, Hödl W. 2005. Cross-modal integration in a dart-poison frog. Proc. Natl Acad. Sci. USA 102, 2425-2429. (doi:10.1073/pnas.0406407102)
62. Preininger D, Boeckle M, Hödl W. 2009. Communication in noisy environments II: visual signaling behavior of male foot-flagging frogs Staurois latopalmatus. Herpetologica 65, 166-173. (doi:10.1655/08-037R.1)
63. de Luna AG, Hödl W, Amézquita A. 2010. Colour, size and movement as visual subcomponents in multimodal communication by the frog Allobates femoralis. Anim. Behav. 79, 739-745. (doi:10.1016/j.anbehav.2009.12.031)
64. Laird KL, Clements P, Hunter KL, Taylor RC. 2016. Multimodal signaling improves mating success in the green tree frog (Hyla cinerea), but may not help small males. Behav. Ecol. Sociobiol. 70, 1517-1525. (doi:10.1007/s00265-016-2160-9)
65. Bee MA. 2015. Treefrogs as animal models for research on auditory scene analysis and the cocktail party problem. Int. J. Psychophysiol. 95, 216-237. (doi:10.1016/j.ijpsycho.2014.01.004)
66. Gomes DGE, Page RA, Geipel I, Taylor RC, Ryan MJ, Halfwerk W. 2016. Bats perceptually weight prey cues across sensory systems when hunting in noise. Science 353, 1277-1280. (doi:10.1126/science.aaf7934)
67. Halfwerk W, Jones PL, Taylor RC, Ryan MJ, Page RA. 2014. Risky ripples allow bats and frogs to eavesdrop on a multisensory sexual display. Science 343, 413-416. (doi:10.1126/science.1244812)
68. Gomes DGE, Halfwerk W, Taylor RC, Ryan MJ, Page RA. 2017. Multimodal weighting differences by bats and their prey: probing natural selection pressures on sexually selected traits. Anim. Behav. 134, 99-102. (doi:10.1016/j.anbehav.2017.10.011)
69. Higham JP, Hebets EA. 2013. An introduction to multimodal communication. Behav. Ecol. Sociobiol. 67, 1381-1388. (doi:10.1007/s00265-013-1590-x)
70. Sandhu R, Dyson BJ. 2016. Cross-modal perceptual load: the impact of modality and individual differences. Exp. Brain Res. 234, 1279-1291. (doi:10.1007/s00221-015-4517-0)
71. James LS, Baier AL, Page RA, Clements P, Hunter KL, Taylor RC, Ryan MJ. 2022. Cross-modal facilitation of auditory discrimination in a frog. FigShare.
