. Author manuscript; available in PMC: 2009 Aug 1.
Published in final edited form as: J Comp Psychol. 2008 Aug;122(3):274–282. doi: 10.1037/0735-7036.122.3.274

Analyzing Acoustic Interactions in Natural Bullfrog Choruses

Andrea Megela Simmons 1, James A Simmons 2, Mary E Bates 3
PMCID: PMC2556862  NIHMSID: NIHMS65099  PMID: 18729655

Abstract

Analysis of acoustic interactions between animals in active choruses is complex because of the large numbers of individuals present, their high calling rates, and the considerable numbers of vocalizations that either overlap or show close temporal alternation. The authors describe a methodology for recording chorus activity in bullfrogs (Rana catesbeiana) using multiple, closely-spaced acoustic sensors that provide simultaneous estimates of sound direction and sound characteristics. This method provides estimates of location of individual callers, even under conditions of call overlap. This is a useful technique for understanding the complexity of the acoustic scene faced by animals vocalizing in groups.

Keywords: bullfrog, chorus, advertisement call, auditory scene, microphone array


During their breeding season, male anurans of many species form aggregations or choruses in which they vocally advertise their presence, possession of a territory and willingness to mate. These choruses can be quite dense, both spatially and acoustically. This density imposes significant perceptual demands on the chorus residents. Males need to regulate the timing of their own calls to minimize interference or masking by the calls of neighbors and to facilitate efficient broadcasting of their calls to recipient females. Field recordings and playback experiments have identified particular strategies males adopt to solve this task, with synchrony of calls or alternation of calls between neighbors being the most common (reviews: Gerhardt & Huber, 2002; Wells & Schwartz, 2007). An individual male within a chorus can also acquire important information about the identity and location of other chorus members by listening to their calls (Boatright-Horowitz et al., 2000; Davis, 1987). Analysis of interactions between chorusing males suggests that males space themselves within choruses and respond to each other by means of certain behavioral “rules” (Boatright-Horowitz et al., 2000; Greenfield & Rand, 2000).

During their spring/summer breeding season, male bullfrogs (Rana catesbeiana) form nightly choruses in ponds or lakes and broadcast advertisement calls, both to attract females for mating and to advertise their presence to rival males. The structure of these choruses is typically quite stable, with individual male frogs occupying essentially the same locations over periods of days, weeks or even months (Boatright-Horowitz et al., 2000; Howard, 1980; Ryan, 1980). The possession of stable, well-defended territories and the prolonged breeding season facilitates familiarity among neighboring males. To a large extent, interactions between males are acoustically mediated. Each male produces advertisement calls periodically, but there are considerable between-male differences in both temporal and spectral properties of these calls, including differences in call rate, fundamental frequency and note duration (Bee & Gerhardt, 2001; Bee, 2004; Simmons, 2004). Within an active chorus, however, it is not always possible to distinguish calls of individuals by spectral or temporal properties alone because calls of multiple bullfrogs can occur simultaneously or with significant overlap in time. Moreover, successive notes from the calls of the same individual vary in envelope modulation (Suggs & Simmons, 2005), which produces additional spectral cues that may be difficult to segregate from those in notes of neighboring males. An alternative or supplemental means of individual identification is to identify the sources of calls using information about the relative spatial locations of males in the chorus. Some individuals are located in close proximity while others are spaced further apart, so that any given male receives an assortment of calls from other males in different directions and at different distances (Boatright-Horowitz et al., 2000). 
The use of both kinds of information (acoustic cues and spatial location) can provide the means of reliably distinguishing individual callers, and then of describing their acoustic interactions with other callers. Our understanding of the structure and dynamics of frog choruses has been limited, however, by the technical challenges of first recording, and then sorting and identifying, the calls of individual males in a dense, noisy chorus in such a manner that all of the relevant information can be obtained.

Much of our knowledge of vocal interactions between chorusing male frogs is based on responses to sound playbacks by individual focal males (often separated from other chorusing males), or on recordings of natural vocal interactions between small groups (two through five) of callers within a larger chorus (e.g., Arak, 1983; Brush & Narins, 1989; Klump & Gerhardt, 1992; Rosen & Lemon, 1974; Schwartz, 1987). Much of this work relies on the use of single microphones for localizing and identifying calling males. While multi-channel recording and call monitoring systems have been described (Brush & Narins, 1989; Grafe, 1996; Schwartz et al., 2002), they have not as yet been widely adopted, even though such techniques offer the ability to analyze choruses over large spatial and temporal scales. Grafe (1997) monitored chorusing behavior of male painted reed frogs (Hyperolius marmoratus) during female phonotaxis using an array of four widely-spaced microphones. Locations of calling males were derived by triangulation based on arrival time differences of vocalizations at pairs of microphones. The focus of this study was on female preferences and not on chorusing dynamics, so vocal interactions between calling males were not analyzed in detail. The array used by Grafe (1997) is similar to those developed for analyses of songbird vocal behavior (McGregor, Dabelsteen, Clark, Bower, Tavares, & Holland, 1997; Merrill, Burt, Fristrup, & Vehrencamp, 2006). While these techniques are promising, recordings and interpretations of vocal interactions within dense, natural choruses taken as a whole and over long chorusing times remain relatively rare, both in anurans and in other chorusing animals (D’Spain & Batchelor, 2006; Greenfield & Snedden, 2003).

In this paper, we describe a technique developed for acoustic surveying that we adapted for the task of identifying individual male bullfrogs in a large active chorus. Our technique is based on audio recordings obtained using a pair of cube-shaped acoustic sensors, each containing multiple closely-spaced microphones, placed in two separate locations around the chorusing site. Data recorded by the sensors are subsequently processed by a computational model of the peripheral auditory system (Mountain, Anderson, Bresnahan, Brughera, Deligeorges, Hubbard, Lancia, & Vajda, 2007) that provides acoustic analysis of recorded sounds as well as estimates of sound source direction around each cube. Because information about the acoustic characteristics of bullfrog vocalizations and about the location of individual callers can be derived from the same records, this is a promising methodology to fully characterize acoustic interactions in noisy environments.

Method

Acoustic recordings were made between 2200 and 2400 hours during the months of June and July, 2005 and 2006, at a pond on private property in a heavily wooded location in central Massachusetts (Figure 1A). The pond is approximately 40 m long and 15 m wide at its widest point, and is surrounded by heavy vegetation. It supports a population of both bullfrogs and green frogs (Rana clamitans). At this site, some males are found close to the margins of the pond, while others are located in clumps of vegetation at some distance from the shore. On any given chorusing night, between 8 and 12 bullfrogs are actively vocalizing. Animals were not captured for visual inspection or morphological measurements, and thus were not disturbed or handled during sound recordings.

Figure 1.

A. Google Earth satellite photograph of the study site (north is to top). Because the pond is located entirely on one side of the array axis, only channels 1 and 3 were used from the left (L) cube, while channels 5 and 7 were used from the right (R) cube. B. Photograph of the two acoustic sensors (cubes) on top of the survey tripods. Each sensor is covered by a black foam windscreen. C. Photograph showing one of the 4-microphone sensors (cubes). D. Plot showing reconstruction of the circular movement of a loudspeaker moved to different locations around one of the 4-microphone arrays to illustrate fidelity of localization by arrival-time differences (lead-lag in μs). The test signal was a frequency sweep from 200 Hz to 1000 Hz.

Sensor Array

Vocalizations of male bullfrogs were recorded using two acoustic sensors (cubes) designed and constructed by the Department of Electrical and Computer Engineering at Boston University as part of a DARPA Acoustic Microsensors Program. The concept guiding this methodology is to use several widely-spaced complete sensor arrays, each consisting of multiple closely-spaced microphones, rather than a single array consisting of several widely-spaced single microphones (Grafe, 1997; McGregor et al., 1997; Merrill et al., 2006). Because the multiple microphones in each sensor array are close together (separated by 3.3 cm in our case), each one receives a nearly identical version of the propagating sound displaced only in time according to the spatial separation between the microphones. Thus, the signals at these microphones are highly correlated because they have undergone virtually identical scattering, reverberation, and filtering by the environment. This simplifies the time-to-angle (azimuth) transformation used to localize the source of the sound. Use of single microphones placed far apart (for example, 16 m in Grafe, 1997; 75 m in Merrill et al., 2006) allows recording of larger time differences between pairs of microphones to achieve potentially greater accuracy in azimuth; however, the signals arriving at pairs of widely-spaced microphones can be decorrelated by multipath effects and environmental filtering that would be different at each microphone. Even over short distances, decorrelation can be severe in areas of heavy vegetation. This would lead to inaccurate localization, thus offsetting the benefits of this approach.
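
The arrival-time difference between a closely-spaced microphone pair is the basic quantity each cube measures, and because the two signals are nearly identical copies displaced in time, it can be recovered as the peak of their cross-correlation. The authors' processing used MATLAB and the EarLab model; the following Python sketch with a synthetic test signal (function names and parameters are illustrative, not the authors' code) shows the underlying idea.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time difference (in seconds) between two
    highly correlated microphone signals via cross-correlation.
    A positive result means sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_a) - 1)
    return lag_samples / fs

# Synthetic check: a windowed 300 Hz burst delayed by 5 samples
# at a 100 kHz processing rate, i.e., a 50 us arrival-time difference
fs = 100_000
t = np.arange(2000) / fs
burst = np.sin(2 * np.pi * 300 * t) * np.hanning(len(t))
delayed = np.roll(burst, 5)  # second microphone receives the sound later
print(estimate_delay(burst, delayed, fs) * 1e6)  # -> approximately 50 us
```

With microphones only 3.3 cm apart, the expected delays are tens of microseconds, which is why the recordings were upsampled to 100 kHz before processing.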

In our application, we used two acoustic sensors (cubes) separated by 10 m (array axis shown in Figures 1A, 3A) placed on one side of the pond. The 10 m separation was chosen because this was the length of an area around the pond that was relatively clear of heavy vegetation. In principle, any separation can be chosen, but there are some constraints. If the sensors are too far apart, then some frogs will be picked up by only one cube, making triangulation difficult. If the sensors are too close together, the bearing vectors from the two cubes to a distant frog become nearly parallel, so their intersection yields a poorly constrained location estimate. The sensors were each mounted on top of a vertical aluminum rod that was itself held in a position about 0.6 m above the ground by an adjustable survey tripod (Figure 1B). Each sensor consisted of a 3.3-cm aluminum cube with 4 sensitive, calibrated, omni-directional electret condenser microphones (Knowles Model FG3329), one placed in each of its four vertical faces (Figure 1C), and covered by a black foam windscreen. Over the 200-2000 Hz frequency range contained in bullfrog advertisement calls (Bee & Gerhardt, 2001; Wiewandt, 1969), the frequency response of these microphones is flat within ±2 dB. The accuracy of localization in azimuth of each sensor was calibrated by tracking the direction of a known sound source (a frequency modulated sweep from 200 Hz to 1000 Hz) moving at a fixed distance of 3 m in a circle 360° around the center of the cube. The estimates of direction made by each sensor match the circular track of the sound source well (Figure 1D), with deviations from this track of approximately 10 μs. The sensors can locate sound sources in 360° of azimuth; however, because the pond and the frogs were located entirely on one side of the array axis (Figure 1A; it was not feasible to place the cubes in the center of the pond), only two of the four microphones in each sensor provided useful data.
In the left-most cube (L in Figure 1A), the two microphones and their recorded signals were designated as channels 1 (to the left) and 3 (to the right). In the other cube, located on the right (R in Figure 1A), channel 7 is the right-most microphone and channel 5 is the left-most microphone. These four microphones were aligned with respect to each other on the array axis by mounting a green laser pointer on top of each sensor and rotating that sensor until the green spot pointed at the other sensor 10 m away.
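
As described in the Data Analysis section and the Figure 3 caption, each cube maps arrival-time differences between -180 μs and +180 μs onto azimuths between -90° and +90°. Under a far-field model this is an arcsine relation, Δt = Δt_max · sin θ. A minimal Python sketch (the exact mapping is our assumption, not the EarLab implementation):

```python
import math

T_MAX_US = 180.0  # maximum inter-microphone delay reported per cube (us)

def delay_to_azimuth(dt_us):
    """Map an arrival-time difference (us) to an azimuth (degrees),
    assuming the far-field relation dt = T_MAX_US * sin(theta)."""
    ratio = max(-1.0, min(1.0, dt_us / T_MAX_US))  # clamp against noise
    return math.degrees(math.asin(ratio))

print(delay_to_azimuth(150.0))  # roughly 56 degrees to the right
```

For a +150 μs difference this gives roughly 56°, consistent with the ~55° reported in the Results for frog #1 at the left cube.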

Figure 3.

(A) Map of pond showing locations of the two sensors (13, 57; solid dots) and coordinates for direction derived from time differences at the left (13) cube. Each cube provides a time estimate between -180 μs and +180 μs (corresponding to angles between -90° to the left and +90° to the right). A sound source located directly at a particular sensor would have an angle of 0°. (B) Estimated location of frog #1 (circled number) from the angle of intersection of the individual vectors (solid lines) from the two cubes.

Data Analysis

Signals picked up by the two active microphones in each sensor were carried by multi-conductor shielded coaxial cables to a custom-built power supply and preamplifier (gain 10X) and recorded on four channels of a Sony SIR-1000W wideband digital instrumentation recorder. The simultaneous sampling rate for the recordings was 48 kHz per channel. Binary files containing the four channels of data [two from the 13 (left) cube and two from the 57 (right) cube] were subsequently downloaded into a Pentium-3 PC using Sony PCScan programs supplied with the Sony recorder. These files were broken into 10 s long consecutive segments using custom-written MATLAB routines. They were then analyzed by a binaural computational model of the auditory system implemented in MATLAB (The MathWorks, Natick, MA) and available online at the Boston University EarLab website (Mountain et al., 2007) for estimates of sound location.
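
The segmentation step can be sketched as follows; the authors used custom MATLAB routines, so this Python version (with illustrative constant names) is only a stand-in.

```python
import numpy as np

FS = 48_000       # per-channel sampling rate of the recordings (Hz)
N_CHANNELS = 4    # channels 1 and 3 (left cube), 5 and 7 (right cube)
SEGMENT_S = 10    # segment length in seconds

def split_into_segments(samples):
    """Split an (n_samples, n_channels) array into consecutive 10 s
    segments, discarding any incomplete tail."""
    seg_len = FS * SEGMENT_S
    n_full = samples.shape[0] // seg_len
    return [samples[i * seg_len:(i + 1) * seg_len] for i in range(n_full)]

# Synthetic stand-in for a downloaded binary file: 25 s of 4-channel noise
recording = np.random.randn(25 * FS, N_CHANNELS)
segments = split_into_segments(recording)
print(len(segments))  # -> 2 full 10 s segments; the 5 s remainder is dropped
```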

The EarLab model incorporates 32 channels of partly-overlapping bandpass filter channels (sixth order Butterworth) with center frequencies spaced at logarithmic intervals from 60 Hz to 5000 Hz. The audio signal is thus segmented into these frequency bands, to facilitate identification of a signal in the appropriate frequency band for bullfrog advertisement calls, and a threshold value is set for each band so as to avoid triggering on background noise. The model then makes a running estimate of arrival time differences (range -180 μs to +180 μs; Figures 2, 3A) at each frequency band between microphone pairs in overlapping (50% overlap) 100 ms bins; this degree of overlap was used because it allowed smooth interpolation between separate samples. For a single bullfrog call note with a duration of 500 ms, at least five time estimates are made at each sensor. These time estimates are then pooled across all filter frequencies by generating a histogram (bin width 10 μs) of time differences at these different frequencies, with the final estimate determined to be at the peak of the histogram. The model then plots peaks in successive histograms to give the running history of time differences across the entire 10 s long data segment (top plots in Figure 2). Consequently, for each 10 s segment analyzed, the resulting dataset consists of a series of up to 250 separate time difference estimates between the pairs of microphones in each sensor. The intrinsic timing accuracy of each cube is based on the sampling rate of the data used in the processing programs, which was 100 kHz (achieved by upsampling the recorded files). At the sampling interval of 10 μs used for processing, out of a total intermicrophone time separation of 180 μs at 90° (Figure 3A), this yields an angular accuracy of about 6° assuming a good signal-to-noise ratio for recording. Under noisy conditions, accuracy could be worse. 
Calibration of the cubes with test signals in the frequency range of bullfrog calls confirmed that the cubes plus the processing software could reliably locate sound sources in angle bins of 10°.
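
The band-filter-then-pool procedure described above can be sketched in Python with SciPy. The Butterworth band edges, the omission of thresholding, and the synthetic two-harmonic test note are our simplifications, not the EarLab model's actual implementation.

```python
import numpy as np
from scipy.signal import butter, correlate, sosfiltfilt

FS = 100_000  # upsampled processing rate (Hz)

def pooled_delay_us(sig_a, sig_b, center_freqs, bin_us=10.0):
    """Estimate the inter-microphone time difference (us) by pooling
    per-band cross-correlation lags into a 10 us histogram and taking
    the peak, loosely following the EarLab pooling step."""
    lags_us = []
    for fc in center_freqs:
        # Bandpass the pair of signals around one filter center frequency
        sos = butter(6, [fc / 1.2, fc * 1.2], btype="band", fs=FS,
                     output="sos")
        a = sosfiltfilt(sos, sig_a)
        b = sosfiltfilt(sos, sig_b)
        corr = correlate(b, a, mode="full", method="fft")
        lag = np.argmax(corr) - (len(a) - 1)
        lags_us.append(lag / FS * 1e6)
    # Pool per-band lags in a histogram covering the +/-180 us range
    edges = np.arange(-185.0, 186.0, bin_us)
    counts, _ = np.histogram(lags_us, bins=edges)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])

# Synthetic two-harmonic note delayed by 15 samples (150 us at 100 kHz)
t = np.arange(int(0.2 * FS)) / FS
note = (np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 300 * t))
note *= np.hanning(len(t))
delayed = np.roll(note, 15)
print(pooled_delay_us(note, delayed, [200.0, 300.0]))  # approx. 150 us
```

Pooling across bands makes the final estimate robust when some filter channels contain mostly noise, which is the rationale for the histogram step in the model.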

Figure 2.

Six note call of single bullfrog (frog #1; see Figure 3). Top: Plots showing time differences (y axes, Δtime in μs) over 10 sec of recording (x axes) for arrival of sounds at the left (13) cube and at the right (57) cube separately. Each data point (gray circle) represents the output of the auditory model for a 100 msec segment of the sound waveform. Middle: Spectrogram showing the position of the second, third, and fourth harmonics (arrows) in each of the six notes. Bottom: Sound pressure waveform recorded at channel 3 of the left cube. The two waveforms preceding and following the notes of frog #1 are calls of the green frog (R. clamitans).

Direction and distance information are obtained by comparing the time difference estimates for a given sound source at each sensor through a process of vector triangulation. That is, the model displays a running vector emanating from each sensor toward each sound source. The point of intersection of these vectors gives an estimate of the location of the source (Figure 3B). To correlate the direction estimates derived from the sensors with actual locations of the calling bullfrogs, one observer visually censused the pond during recordings and marked the location of each animal on a scaled map.
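
The triangulation step amounts to intersecting two bearing rays, which reduces to a 2×2 linear system. A sketch under an assumed coordinate frame (sensors 10 m apart on the x axis, as in our layout; azimuths measured from the boresight; names are illustrative):

```python
import math

def bearing_vector(az_deg):
    """Unit vector for an azimuth measured from the boresight (+y),
    with positive angles to the right (+x)."""
    a = math.radians(az_deg)
    return (math.sin(a), math.cos(a))

def triangulate(p1, az1_deg, p2, az2_deg):
    """Intersect the bearing rays p1 + t1*d1 and p2 + t2*d2 by solving
    the 2x2 system [d1 -d2][t1 t2]^T = p2 - p1 with Cramer's rule."""
    d1, d2 = bearing_vector(az1_deg), bearing_vector(az2_deg)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical example: cubes at (0, 0) and (10, 0); a source at (5, 5)
# lies 45 degrees right of the left cube and 45 degrees left of the right
print(triangulate((0, 0), 45.0, (10, 0), -45.0))  # -> approximately (5, 5)
```

The same geometry explains why scatter in the time difference estimates translates into larger position errors for distant frogs: small angular errors swing the intersection point farther as range increases.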

To determine the spectral content of each recorded sound, the four-channel binary files were further separated into two stereo .wav files, one for channels 1 and 3 from the left sensor and the other for channels 5 and 7 from the right sensor. General acoustic characteristics of the calls (duration, duty-cycle, harmonic frequencies, onset time) were analyzed with custom-written MATLAB routines and then displayed as spectrograms and sound pressure waveforms using Adobe Audition v 1.5 (Adobe Systems, San Jose CA).
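
As a stand-in for the spectral measurements (the authors used custom MATLAB routines and Adobe Audition), the dominant harmonic of a note can be read from an FFT peak. The 105/210/315 Hz synthetic harmonics below mimic the frog #1 example described in the Results; they are illustrative, not recorded data.

```python
import numpy as np

FS = 48_000  # recording sample rate (Hz)

def dominant_frequency(note, fs=FS):
    """Return the frequency (Hz) of the strongest spectral peak in a
    call note, e.g., a bullfrog's dominant second harmonic."""
    spectrum = np.abs(np.fft.rfft(note * np.hanning(len(note))))
    freqs = np.fft.rfftfreq(len(note), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic 0.5 s note: harmonics of a 105 Hz fundamental, with the
# second harmonic (210 Hz) dominant
t = np.arange(int(0.5 * FS)) / FS
note = sum(a * np.sin(2 * np.pi * f * t)
           for a, f in [(0.4, 105.0), (1.0, 210.0), (0.6, 315.0)])
print(round(dominant_frequency(note)))  # -> 210
```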

Results

To illustrate the usefulness of the recording method, data presented here are drawn from 8000 s of continuous chorus recording from one night (070906). On this particular night, 12 male bullfrogs were vocalizing. Visual and audio analysis of calling patterns within this chorus revealed that males often overlapped or synchronized their calling with other males. Approximately 640 s of the total recording time (about 8% of the total) consisted of an individual male vocalizing alone, so that none of the individual notes in his advertisement call overlapped or alternated with those from another male (Figure 2). This kind of acoustic pattern could be easily analyzed by ear, or by single-microphone recordings from individual focal males. On the other hand, a considerably larger proportion (about 38%) of the total chorus time, approximately 3000 s on the sample night, consisted of multiple (2-5) frogs calling together, with both overlap and alternation of individual notes in their advertisement calls (Figures 4 and 6). It is this kind of calling pattern that is difficult to analyze by ear or by recordings from individual focal males, and where knowledge of location becomes valuable for segregating the acoustic contribution of each individual.

Figure 4.

Plots showing time differences (Δtime in μs) from the two cubes (top two plots), spectrogram (middle plot), and sound pressure waveform (bottom) for a 20 s segment in which three bullfrogs are calling. Two of these animals (labeled as frog #7 and frog #8) call in note-by-note alternation. All three animals can be distinguished by their locations with respect to the 13 cube (dashed lines for each identified individual in the top plot); they cannot be clearly separated by the 57 cube alone. The spectrogram shows that the individual notes of these three bullfrogs also differ in second harmonic frequency. The initial note in the spectrogram display (at about 30 s) is from a different individual, not identified further in this figure.

Figure 6.

Plots showing time differences from the two cubes (top two plots), spectrogram (middle plot), and sound pressure waveforms (bottom) for a 20 s segment in which multiple bullfrogs are calling. In the time difference plots, clusters of circles (estimates of location) for the individual bullfrogs are joined by dashed lines for clarity. Because of the overlap of notes in the calls of these individuals and the considerable background noise present during recordings, there are fewer estimates than for the data in Figures 2 and 4. The smaller number of estimates increases the error of localization. In this example, the location of frog #12 cannot be reliably derived from the output of the 13 cube, while the location of frog #11 cannot be derived from the output of the 57 cube.

An example of the output of the model for one animal (frog #1) calling alone (total time interval 10 s) is shown in Figure 2. This recording was taken under conditions of minimal background noise (good signal-to-noise ratio). Time difference estimates (Δtime in μs) derived from each cube separately are shown in the top two plots; the spectrogram is shown in the middle plot, and the sound pressure waveform recorded at microphone #3 is shown in the bottom plot. As indicated in the time waveform, this bullfrog's advertisement call consists of six individual notes (croaks), with durations varying from 542 ms (first note) to 631 ms (last note). The spectrogram shows that each note consists of a series of harmonics ranging up to around 400 Hz (for clarity, upper harmonics are omitted), with a dominant second harmonic frequency of 210 Hz for each note. The time waveform and the spectrogram together provide amplitude and frequency information. Direction of the sound source (bullfrog) with respect to the two cubes can be derived from the top two plots. Each of these plots gives the time difference estimates for consecutive, overlapping 100 ms segments in each note, as well as for any background sounds that exceed the threshold value. The plot for the 13 (left) cube shows estimates of arrival times for each note at around +150 μs, corresponding to a direction of about 55° to the right of this sensor. Conversely, the plot for the 57 (right) cube shows estimates of arrival times at about -175 μs, corresponding to a direction of about 85° to the left of this sensor. The plot for the 57 cube also shows the presence of other sounds at about +100 μs occurring both before and after this series of notes. We acoustically identified these sounds as calls of the green frog. Figure 3B shows the triangulated location (circled) of frog #1, based on the intersection of the direction vectors from the two cubes, on a map of the pond.
This particular animal is located on the near side of the pond, to the immediate left of the 13 cube.

A 20 s segment of vocal activity from three bullfrogs is shown in Figure 4. This segment was chosen for display because it represents a very common calling pattern in the chorus on this particular recording night. The sound pressure waveforms and spectrogram show that one frog calls individually, and his calls are followed by calls of two animals whose individual notes alternate. These latter two frogs were the most frequent callers on the sample night, with this pattern of note-by-note alternation occurring often (280 occurrences total in 8000 s of recording). The time difference plot for the 13 cube (top plot) indicates that these three frogs (labeled frog #6, frog #7, and frog #8; labels were given to individuals in the order in which they were localized in the data analysis) are located in different directions with respect to this cube. The cube gives an estimate of direction of about -175 μs for frog #6, about -120 μs for frog #7, and about +110 μs for frog #8. The time difference estimates from the 57 cube indicate that these bullfrogs are all located to the far left of this sensor (at about -175 μs), but the three individuals cannot be separated from this cube alone. That is, use of the microphones in this cube alone (and, by extension, use of a single microphone at this particular location) cannot separate the calls of these three animals, although it does provide a general direction. By the use of triangulation, however, estimates of location can be made for these three animals, as shown in Figure 5. Frogs #7 and #8 are separated by about 15 m, indicating that farther neighbors do engage in call alternation.

Figure 5.

Map of the pond showing locations of twelve calling bullfrogs (numbered circles) based on intersections of the vectors derived from time differences at the left (13) and right (57) cubes. Multiple estimates were made to determine the locations of each frog.

Figure 5 also shows the triangulated locations of the other animals we identified in the chorus. Each of these triangulations was made on the basis of multiple location estimates over the entire 8000 s of recording, combined with analysis of the spectrograms of the calls to ensure that closely-spaced individuals were in fact different frogs and not the same frog moving in location over the course of the recordings. Animals are not evenly or uniformly spaced around the pond but more often are found in clusters. Because of the angular accuracy of the cubes (6 to 10° for strong signals), we cannot, particularly for animals located at some distance from the cubes, specify the exact width of each animal’s territory.

Figure 6 shows an example of a complex calling bout in which five frogs participated, and which took place against high levels of background wind noise (as indicated by the time domain waveform). This example was chosen because it shows both the usefulness and the limitations of the sensor array in identifying and localizing individuals in a noisy chorus, particularly when the males are close together, when their calls overlap, and when the signal-to-noise ratios of recordings may have been degraded. Notes from the individual animals cannot be easily separated in amplitude from the time domain waveform or in harmonic frequencies from the spectrogram, and it is not clear from either of these displays how many animals are vocalizing. The output of the model (triangulated locations in Figure 5) suggests that three of the frogs (frog #9, frog #10, and frog #11) are located very close together on the opposite side of the pond from the array. Frog #9 and frog #10 call in synchrony with almost complete note overlap, with frog #11 calling in partial overlap with these two. These particular animals were among the least active callers in the chorus, but when they did vocalize, they did so in this pattern of synchronous calling (about 40 occurrences in 8000 s of recording).

For the data in Figure 6, the time difference plots from each cube give slightly different information, showing that the output from both cubes together is needed to separate the individuals. The plot from the 13 cube suggests that calls of four frogs (frogs #6, #9, #10, #11) can be separated by location. But, because of the overlap of sound sources (both bullfrog call notes and extraneous background noise that exceeded the threshold value) from different locations occurring within the same time window used for deriving location estimates, fewer discrete data points (gray circles) are available in the model output, and there are fewer estimates for each call note and more scatter in the data. The plot from the 57 cube also identifies frogs #6, #9, and #10, but cannot separate out frog #11. This plot also indicates the presence of yet another individual, here designated as frog #12, located to the far left of this cube. There is no consistent estimate of location for frog #12 in the plot from the 13 cube, however, because of the wide scatter in the estimates (shown by the dashed line and question mark) due to weak recorded signals. Because only the vector direction from the 57 cube is known, a triangulated location for this bullfrog cannot be made from this particular segment of recording. This example shows that the sensor array must make multiple estimates in order to accurately triangulate a location. Presumably, each individual frog must itself make multiple listening estimates in order to localize its rivals. In fact, estimates of direction from other segments of the recording yield a location for frog #12 to the right of frog #9 (Figure 5).

Discussion

The goal of our project was to adapt an acoustic recording array using multiple closely-spaced microphones to facilitate identification of individual bullfrogs in an active chorus on the basis of both their spatial location and the acoustic characteristics of their calls. Advantages of our system include the small size and portability of the sensors and the ease in setting up the array. A large amount of data can be recorded from the multiple microphones in each sensor simultaneously, thus providing excellent temporal synchrony between events on each channel. The use of the sensors permits recording and analysis of all vocalizations of all animals in the chorus, even when vocalizations overlap in time, because the sensors facilitate separation of overlapping calls by both direction and distance. Moreover, when combined with subsequent acoustic processing, the sensor array can distinguish frogs by the spectral characteristics of their calls as well. The difficulty of segregating calls from individual males (Figure 6) without the location information provided by the sensor array shows the importance of having this information for accurate description of the chorus. These data allow the parsing of complex acoustic interactions between individuals, which may play a vital role in chorus dynamics.

We note that our array technique is not necessarily superior to other array techniques (Grafe, 1997; McGregor et al., 1997; Merrill et al., 2006). However, the small distance between the microphones in our sensors mitigates problems of decorrelation at widely-separated microphones caused by environmental transmission effects. In our experience, such decorrelation is underestimated as a difficulty in field recordings. An ideal set-up would include several multiple-microphone sensors placed at locations all around the calling site, although the computational demands of such a system would be a serious disadvantage. Even with the two sensors we used here, the computational load required to retrieve and process the data both in terms of spatial location and in terms of spectral analysis of sounds is considerable. Another disadvantage of our system lies in its accuracy for estimating locations. Because of the shape of the calibration curve (Figure 1D), small errors in time difference estimates could translate into larger errors in direction at increasing distances from the array. As the data in Figure 6 show, there can be considerable scatter in these estimates, particularly under realistic conditions of overlapping calls. Localization accuracy would presumably have been improved if all 4 microphones in each sensor were active. The array could also be tailored specifically to the frog species of interest; for example, increasing the microphone spacing to the first harmonic frequency of the advertisement call (100 Hz or 30 cm in the case of the bullfrog) might improve localization accuracy. Another limitation of our study lies in the spatial constraints we experienced in placing the sensors.
We chose the configuration in Figure 1A because of difficulties in placing the sensors in the middle of the pond, the limited amount of cleared space around the margins of the pond, and because visual observations indicated the presence of male bullfrogs closer to and to the right of the 57 (right) cube. These particular bullfrogs did not, however, vocalize during our recording sessions. Because most of the vocalizing animals were located far to the left of the 57 cube, the microphones in that sensor provided very similar estimates of location for some animals (Figure 4). These animals could, however, be distinguished by the spectral characteristics of their calls and by the information from the 13 cube, underscoring the value of both spectral analysis and multiple sensors. These examples also show that errors in pinpointing the actual location of the calling bullfrogs could arise from suboptimal placement of the two sensors. Some prior knowledge of chorus structure could facilitate proper placement of the sensors; however, movements of individual males from one calling site to another may occur between recording nights and thus complicate positioning.

Some of the details of our acoustic analyses confirm other reports on bullfrog advertisement calling, showing that the recording array is appropriate for identification of these vocalizations. This is important because these previous data, although collected during active choruses (Bee & Gerhardt, 2001; Bee, 2004; Simmons, 2004; Suggs & Simmons, 2005; Wiewandt, 1969), are based on recordings from focal animals, where presumably the individual notes chosen for analysis did not overlap with the notes of other frogs. Consistent with the work of Bee (Bee & Gerhardt, 2001; Bee, 2004), we observed individual differences in the second harmonic frequencies of the notes (in particular, first notes) of calls of individual bullfrogs. These individual differences aid in separating the calls of closely-spaced males. Analysis of acoustic interactions between males at another bullfrog chorus site suggested that bullfrogs call preferentially in response to far, as opposed to near, neighbors (Boatright-Horowitz et al., 2000). Because of the difficulty of separating overlapping notes from analog tape recordings, those data were based only on analysis of calling patterns in which no note overlap occurred. The data reported here, which include analysis of overlapping notes, show that two of the most active callers alternated individual notes with farther neighbors (Figure 4), while closely spaced males called in synchrony with their close neighbors (Figure 6). This suggests that males may adopt different calling strategies depending on intermale spacing. Further work using the sensor array will more closely examine these patterns of interaction and their stability over multiple recording nights.
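As a simple illustration of how spectral differences can help separate callers, the following sketch estimates the dominant frequency of a recorded note within a band around the bullfrog's second harmonic. The band edges, the ~200 Hz reference value, and the function name are illustrative assumptions, not the analysis procedure used in the studies cited above.

```python
import numpy as np

def peak_frequency(note, fs, band=(150.0, 350.0)):
    """Estimate the dominant frequency (Hz) of a call note within a
    frequency band, e.g. around the bullfrog's second harmonic.

    A Hann window reduces spectral leakage before the FFT; the band
    limits exclude energy from other harmonics."""
    spectrum = np.abs(np.fft.rfft(note * np.hanning(len(note))))
    freqs = np.fft.rfftfreq(len(note), 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(spectrum[in_band])]
```

Applied to first notes from different males, consistent per-individual differences in this estimate could serve as one cue, alongside location, for attributing overlapping calls to their sources.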

Acknowledgments

Equipment for producing the sensor array was provided by a grant from the Department of Defense University Research Instrumentation Program to James A. Simmons. The acoustic sensors were designed and built at Boston University for the Acoustic Microsensors Program at the Defense Advanced Research Projects Agency. Support for collection of field data and for data analysis was provided by NIH grant R01 DC05257 to Andrea M. Simmons and ONR contract N00014-04-1-0415 to James A. Simmons. We thank Socrates Deligeorges, Brett Cropp, and Paulo Guilhardi for assistance.

Contributor Information

Andrea Megela Simmons, Departments of Psychology and Neuroscience, Brown University, Providence R.I., U.S.A.

James A. Simmons, Department of Neuroscience, Brown University, Providence R.I., U.S.A.

Mary E. Bates, Department of Psychology, Brown University, Providence R.I., U.S.A.

References

  1. Arak A. Vocal interactions, call matching and territoriality in a Sri Lankan treefrog, Philautus leucorhinus (Rhacophoridae) Animal Behaviour. 1983;31:292–302. [Google Scholar]
  2. Bee MA. Within-individual variation in bullfrog vocalizations: Implications for a vocally mediated social recognition system. Journal of the Acoustical Society of America. 2004;116:3770–3781. doi: 10.1121/1.1784445. [DOI] [PubMed] [Google Scholar]
  3. Bee MA, Gerhardt HC. Neighbor-stranger discrimination by territorial male bullfrogs (Rana catesbeiana): I. Acoustic basis. Animal Behaviour. 2001;62:1129–1140. [Google Scholar]
  4. Boatright-Horowitz SL, Horowitz SS, Simmons AM. Patterns of vocal response in a bullfrog (Rana catesbeiana) chorus: Preferential responding to far neighbors. Ethology. 2000;106:701–712. [Google Scholar]
  5. Brush JS, Narins PM. Chorus dynamics of a Neotropical amphibian assemblage: comparison of computer simulation and natural behaviour. Animal Behaviour. 1989;37:33–44. [Google Scholar]
  6. Davis MS. Acoustically mediated neighbor recognition in the North American bullfrog, Rana catesbeiana. Behavioral Ecology and Sociobiology. 1987;21:185–190. [Google Scholar]
  7. D’Spain GL, Batchelor HH. Observations of biological choruses in the Southern California Bight: A chorus at midfrequencies. Journal of the Acoustical Society of America. 2006;120:1942–1955. doi: 10.1121/1.2338802. [DOI] [PubMed] [Google Scholar]
  8. Gerhardt HC, Huber F. Acoustic communication in insects and anurans: Common problems and diverse solutions. University of Chicago Press; Chicago: 2002. [Google Scholar]
  9. Grafe TU. The function of call alternation in the African reed frog (Hyperolius marmoratus): precise call timing prevents auditory masking. Behavioral Ecology and Sociobiology. 1996;38:149–158. [Google Scholar]
  10. Grafe TU. Costs and benefits of mate choice in the lek-breeding reed frog, Hyperolius marmoratus. Animal Behaviour. 1997;53:1103–1117. [Google Scholar]
  11. Greenfield MD, Rand AS. Frogs have rules: Selective attention algorithms regulate chorusing in Physalaemus pustulosus (Leptodactylidae) Ethology. 2000;106:331–347. [Google Scholar]
  12. Greenfield MD, Snedden WA. Selective attention and the spatio-temporal structure of orthopteran choruses. Behaviour. 2003;140:1–26. [Google Scholar]
  13. Howard RD. The evolution of mating strategies in bullfrogs, Rana catesbeiana. Evolution. 1978;32:850–871. doi: 10.1111/j.1558-5646.1978.tb04639.x. [DOI] [PubMed] [Google Scholar]
  14. Klump GM, Gerhardt HC. Mechanisms and function of call-timing in male-male interaction in frogs. In: McGregor PK, editor. Playback and studies of animal communication. Plenum Press; New York: 1992. pp. 153–174. [Google Scholar]
  15. McGregor PK, Dabelsteen T, Clark CW, Bower JL, Tavares JP, Holland J. Accuracy of a passive acoustic location system: empirical studies in terrestrial habitats. Ethology Ecology & Evolution. 1997;9:269–286. [Google Scholar]
  16. Merrill DJ, Burt JM, Fristrup KM, Vehrencamp SL. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest. Journal of the Acoustical Society of America. 2006;119:2832–2939. doi: 10.1121/1.2184988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Minckley RL, Greenfield MD, Tourtellot MK. Chorus structure in tarbush grasshoppers: inhibition, selective phonoresponse and signal competition. Animal Behaviour. 1995;50:579–594. [Google Scholar]
  18. Mountain D, Anderson D, Bresnahan G, Brughera A, Deligeorges S, Hubbard A, Lancia D, Vajda V. EarLab: A virtual laboratory for auditory experimentation. 2007 http://scv.bu.edu/SCV/vizgal/earlabnew/earlab.html.
  19. Rosen M, Lemon RE. The vocal behavior of spring peepers, Hyla crucifer. Copeia. 1974;1974:940–950. [Google Scholar]
  20. Ryan MJ. The reproductive behavior of the bullfrog (Rana catesbeiana) Copeia. 1980;1980:108–114. [Google Scholar]
  21. Schwartz JJ. The function of call alternation in anuran amphibians: A test of three hypotheses. Evolution. 1987;41:461–471. doi: 10.1111/j.1558-5646.1987.tb05818.x. [DOI] [PubMed] [Google Scholar]
  22. Schwartz JJ. Male calling behavior, female discrimination and acoustic interference in the Neotropical treefrog Hyla microcephala under realistic acoustic conditions. Behavioral Ecology and Sociobiology. 1993;32:401–414. [Google Scholar]
  23. Schwartz JJ, Buchanan B, Gerhardt HC. Acoustic interactions among male gray treefrogs (Hyla versicolor) in a chorus setting. Behavioral Ecology and Sociobiology. 2002;53:9–19. [Google Scholar]
  24. Simmons AM. Call recognition in the bullfrog, Rana catesbeiana: Generalization along the duration continuum. Journal of the Acoustical Society of America. 2004;115:1345–1355. doi: 10.1121/1.1643366. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Suggs DN, Simmons AM. Information theory analysis of patterns of amplitude modulation in the advertisement call of the male bullfrog, Rana catesbeiana. Journal of the Acoustical Society of America. 2005;117:2330–2337. doi: 10.1121/1.1863693. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Wells KD, Schwartz JJ. The behavioral ecology of anuran communication. In: Narins PM, Feng AS, Fay RR, Popper AN, editors. Hearing and sound communication in amphibians. Springer-Verlag; New York: 2007. pp. 44–86. [Google Scholar]
  27. Wiewandt T. Vocalization, aggressive behavior, and territoriality in the bullfrog, Rana catesbeiana. Copeia. 1969;1969:276–285. [Google Scholar]
