Abstract
Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups, the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.
INTRODUCTION
Laughter and auditory experience
Laughter is a nonverbal mode of communication that occurs on its own as well as in the context of spoken language. It is believed to be common to all humans (Darwin, 1872; Provine and Fischer, 1989; Provine and Yong, 1991), and has been described as being both innate and universal (Hirson, 1995; Provine, 2000). Laughter has been shown to occur in a variety of cultures (e.g., India: Savithri, 2000; Norway: Svebak, 1975; Papua New Guinea: Eibl-Eibesfeldt, 1989; Tanzania: Rankin and Philip, 1963; United States: Bachorowski et al., 2001), and to be produced across ages and genders (infants and children: Grammer and Eibl-Eibesfeldt, 1990; Hall and Allin, 1897; Mowrer, 1994; Nwokah et al., 1993; Sroufe and Wunsch, 1972; adults: Bachorowski et al., 2001; Hall and Allin, 1897; LaPointe et al., 1990).
Recognizable laughter has furthermore been reported in case studies of individual children that were either deaf or both deaf and blind (Black, 1984; Eibl-Eibesfeldt, 1989), as well as in seven infants with profound hearing loss, whose vocal production was compared to that of normally hearing individuals in the first 12 months of life (Scheiner et al., 2002; 2004). These developmental results thus provide clear evidence that direct auditory experience with laughter by others is not necessary for the emergence of these sounds. Provine and Emmorey (2006) have provided evidence of basic similarities between laughter in deaf and hearing adults as well, reporting both that the former produced normal-seeming laugh sounds, and that these vocalizations occurred predominantly during pauses and at phrase boundaries in sign-language production. This last finding parallels Provine’s (1993) report that laughter “punctuates” rather than interrupts the flow of speech in normally hearing talkers. However, only Scheiner et al. (2002; 2004) reported acoustic analyses in this earlier work, and their limited sample size did not allow detailed comparison of laugh acoustics in the impaired and normally hearing groups. Inferences about possible effects of experience on finer-grained aspects of vocal acoustics were also hindered by the fact that each of the impaired infants received a hearing aid during the course of the study.
While these reports indicate that auditory (and visual) experience is not necessary for the emergence of normative laughter, there is also evidence to suggest that social learning can influence the acoustic structure of these sounds. For example, LaPointe et al. (1990) noted differences in several measures of laughter in 20- versus 70-year-old adults, including number of laughs, laugh rate, and pitch-related characteristics (see also Apte, 1985). Less direct evidence includes the observation that laughter shows significant acoustic variability both within and among adult vocalizers (Bachorowski et al., 2001; Hall and Allin, 1897). Although age, gender, and individual differences in both vocal-tract physiology and sense of humor are likely to be important contributors (LaPointe et al., 1990; Mowrer et al., 1987; Nwokah et al., 1993), finding this kind of variability also raises the possibility of social or other learning.
On the one hand, then, laughter is considered to be innate, in the limited sense that direct auditory experience with laughter is not necessary to its emergence in recognizable form. On the other hand, relatively few directly relevant acoustical data are as yet available, for instance, from adult humans who have had little or no opportunity to experience laugh sounds. The current work approached this problem by identifying college-age adults with maximally diminished auditory capabilities and comparing their laughter to that produced by peers with normal hearing. Although the two groups are for simplicity referred to as differing in “auditory experience,” lack of aural input is probably only one of a number of ways in which the hearing-impaired individuals tested here differed from the control participants. As a result of not hearing their own vocalizations, for instance, these deaf laughers would also have had less opportunity to acquire experience-guided control over concomitant respiratory functions, as well as laryngeal, oral, and other vocal-tract musculature.
Basic acoustic properties of laughter
The goal of the study was thus to examine the laughter produced by deaf vocalizers, comparing its acoustic features to laughter from normally hearing individuals. The laugh-elicitation procedures and acoustic measures used were drawn from previous work, relying most heavily on Bachorowski et al.’s (2001) study of 97 normally hearing male and female college students. While finding a high degree of acoustic variability both within and among the individual vocalizers (see also Edmonson, 1987), Bachorowski et al. also provided quantitative evidence concerning stable features, such as a basic distinction between voiced and unvoiced laughs (see also Grammer and Eibl-Eibesfeldt, 1990). The former is laughter based on vowel-like, somewhat melodic sounds produced through regular, synchronized vocal-fold vibrations in the larynx. The latter, in contrast, is laughter based on noisy sounds in which the vocal folds either do not vibrate or vibrate in an irregular, desynchronized fashion. Although laughter has been argued to consist primarily of stereotyped bouts of vowel-like bursts (e.g., Provine and Yong, 1991; but see Kipper and Todt, 2003), Bachorowski et al. found that their participants produced a higher proportion of unvoiced or mixed laughs than the voiced variety.
Other findings included that the basic vocal-fold vibration rate (fundamental frequency, or F0) in voiced laughs was approximately twice that of comparable voiced sounds in spoken language, and that neither voiced nor unvoiced laughter showed distinct vowel qualities, such as “ha-ha,” “hee-hee,” or “ho-ho.” Differentiated vowel sounds are largely produced through flexible positioning of the tongue, mandible, and lips to change the resonance characteristics, or formants, of the vocal-tract cavities above the larynx. Bachorowski et al. reported little acoustic evidence of these kinds of articulation effects, specifically finding some variability in the first formant (F1), very little in the second (F2), and that both formants fell predominantly in the middle of their respective ranges as observed in speech sounds. The conclusion was therefore that laughter is typically characterized by a neutral, “schwa-like” vowel quality, and is likely produced without consistently patterned repositioning of articulatory structures.
Nonauditory influences on laughter in deaf vocalizers
Several factors that could alter the laughter of deaf relative to normally hearing participants were also identified. For example, other aspects of vocal production by the deaf have been found to be different than in hearing individuals, including slow, monotone speech with a breathy or harsh quality (e.g., Leder and Spitzer, 1993; Okalidou and Harris, 1999; Osberger and Levitt, 1979; Osberger and McGarr, 1983). These kinds of differences are arguably expected given that normative speech development is heavily dependent both on auditory experience and on social modeling. Some researchers have also highlighted the possibility of deficient laryngeal and oral muscle control in the deaf (LaPointe et al., 1990; Okalidou and Harris, 1999). Such deficiencies could contribute to the slower, elongated vowels reported for deaf speech (Bakkum et al., 1995; Okalidou and Harris, 1999) and might also affect laugh production. Rhythm has also been found to be affected, for example, in phoneme and syllable timing (Rothman, 1976).
Differences in laughter produced by deaf and hearing participants could also occur as a result of individuals in the two groups responding differently to the stimulus material used to elicit laughter. Hearing-impaired persons who use sign language as a primary form of communication share a culture that can be significantly different from that of normally hearing peers, including its humor (e.g., Padden and Humphries, 1990; Ladd, 2003). If so, discrepancies in laugh acoustics could arise from differences in the degree or valence of emotional reactions during experimental elicitation of laughter. Finally, deaf individuals report experiencing social pressure to suppress spontaneous vocalizations, as these can be uncomfortably loud for the hearing (Leder and Spitzer, 1993). In the current work, such tendencies were anticipated both to decrease the amount of laughter produced and to increase muffling of laugh sounds, for example through keeping the mouth closed, with the result that laughter more typical of low-arousal emotional responses might occur even when vocalizers were experiencing more intense reactions.
While each of these factors could potentially contribute to differences between laughter in deaf and hearing individuals, the documented variability of laugh sounds both within and among individual vocalizers (e.g., Bachorowski et al., 2001) could have the opposite effect. Specifically, given the sample-size limitations inherent to targeting a group as small as congenitally and profoundly deaf college students, a high degree of variability could obscure differences that do exist in laugh sounds occurring in the respective underlying populations. Results of the acoustic analyses will be considered in light of each of these factors, all of which are concluded to likely play some role in the outcomes.
The current study
The laughter samples analyzed in the current work were collected from hearing-impaired college students attending Gallaudet University (Washington, D.C.), and from normally hearing students at Cornell University (Ithaca, NY). Laughter was recorded from participants using headworn microphones as these individuals watched a series of comic movie clips. As the goal for the hearing-impaired sample was to identify individuals with the least possible exposure to laugh sounds, screening was quite restrictive. Although Gallaudet is the “first and still only university for deaf and hard of hearing people in the world” (http://www.gallaudet.edu/x251.xml), the restrictiveness of the selection criteria still strongly limited sample size for hearing-impaired participants. Specifically, these individuals were required to have congenital and bilateral hearing loss that placed them in the most severely compromised, “profoundly deaf,” category of auditory impairment. Results nonetheless produced telling evidence that characteristic laughter occurs in humans in the virtual absence of auditory experience.
METHODS
Subjects
Eight female and five male students enrolled at Gallaudet University (GU) and six female and six male students enrolled at Cornell University (CU) were recruited for this study. GU students were recruited with fliers posted on campus, while CU participants were recruited through an experiment-participation website. The recruits were instructed to come to a laboratory on their respective campuses accompanied by a same-sex friend and were tested in pairs. Total initial participation was thus 26 GU and 24 CU students.
All participants were screened for fluency in written English, good or corrected-to-good vision, and the absence of respiratory ailments. GU and CU students reported American sign language (ASL) and English to be their primary or native languages, respectively. GU participants were also asked to report their age at diagnosis of deafness, severity and type of deafness, and history of hearing-aid use. All participants were tested, but only those who met the criteria of being diagnosed in infancy with bilateral and profound deafness were included in the study (with one exception). Profound deafness was defined as a hearing loss of over 90 dB, which corresponds to an inability to understand speech with or without amplification (hearing aids), and an awareness only of vibration rather than of sound. One male GU participant who was congenitally, bilaterally, and severely deaf was also included. Severe deafness was defined as a 71–90 dB loss, which translates to an inability to understand speech without amplification. At this level of deafness, speech can be understood only through a combination of speech-reading and auditory support. This severely deaf participant had never used hearing aids or other auditory facilitation, and his experience with sound was therefore deemed comparable to that of profoundly deaf individuals who did have a history of hearing-aid use.
Recordings from seven GU students were excluded from analysis due to recent hearing-aid use (less than 6 years before), failure to meet the hearing-loss criteria, or because they did not laugh during the session. CU participants were screened for self-reported normal audition, and constituted the hearing group for this study. Data from one CU female were discarded due to poor recording quality. In the end, 19 GU (12 profoundly deaf females, six profoundly deaf males, and one severely deaf male) and 23 CU participants constituted the deaf and hearing groups, respectively.
Materials and procedures
Stimuli
Eight short movie clips compiled on a digital video disk (DVD) using Mac OS X iMovie and iDVD software were used as stimuli (see Table 1). Half of the clips were meant to be funny and were chosen from comedy movies for their laugh-inducing potential. The rest were taken from dramas or science-fantasy films and were meant to be emotion inducing but not humorous. The latter were included in the DVD in order to make the cover story as plausible as possible. Specifically, participants were told that the study was testing the effect of emotion on respiration, and that audio recording was being used to document each person’s breathing for later analysis. In order to appeal to both deaf and hearing students, the clips emphasized physically based action with minimal reliance on dialog. However, all clips did include English-language subtitles. Deaf students viewed the movie clips without sound, while hearing students viewed the clips at a preset, low volume.
Table 1.
Movie clips used as stimuli listed by movie title, clip description, and clip duration. Clips are labeled as either humorous (h) or nonhumorous (nh), and are listed in order of presentation during the session.
| Movie title | Clip description | Type | Dur (s) |
|---|---|---|---|
| Robin Hood: Men in Tights | Robin Hood returns to Locksley only to find his castle is being repossessed | (h) | 195 |
| Harry Potter and the Sorcerer’s Stone | Harry and Ron battle a troll that has cornered Hermione in the girls’ bathroom | (nh) | 207 |
| Grumpy Old Men | Max and John engage in a war of pranks | (h) | 150 |
| The Trouble with Mr. Bean | Mr. Bean goes to the dentist | (h) | 207 |
| Ocean’s Eleven | The Ocean’s 11 team attempts to blow up a casino vault door | (nh) | 255 |
| The Naked Gun | Detective Nordberg has a series of mishaps as he attempts to stop a cocaine deal | (h) | 135 |
| Reign of Fire | Young Quinn watches as his mother accidentally wakes a dragon | (nh) | 144 |
| Cast Away | Chuck is spotted by a ship while floating in the ocean on a homebuilt raft | (nh) | 287 |
Apparatus
Participants were seated in heavy-duty, metal and cloth picnic chairs oriented toward a 20 in. color television. Participant vocalizations were recorded using a Special Projects head-worn microphone, with the microphone arm running parallel to the cheek and the tip positioned 1 in. from the left corner of the mouth. All other recording equipment was located in an adjacent control room. The microphone signal was routed through a Whirlwind SP1×3 microphone splitter, and the two resulting signals were recorded on the left and right channels of a Marantz CDR300 professional-grade compact disk (CD) recorder. To ensure high-quality recording of laughs produced at any amplitude, the two channels were set at different, standardized recording levels. Recording levels were set prior to each session based on a constant-amplitude, 700 Hz tone produced by a Shure AT15 tone generator with an Audio-Technica AT8202 in-line attenuator set at −10 dB. Recordings were made using a 44.1 kHz sampling rate and archived on CDs. The movie-clip DVD was played on an Apex AD-660 DVD player, also located in the control room. Acoustic data were first processed using PRAAT acoustics software (Boersma, 2001), and then analyzed with ESPS∕WAVES+5.3 (“XWAVES”; Entropic Research Laboratory). Statistical analysis was conducted using the NCSS 2000 (Jerry Hintze, Kaysville, UT) and SAS 9.2 (SAS Institute, Cary, NC) statistics packages.
Testing procedure
Participants came to the laboratory in same-sex, friend pairs under the impression that they would be involved in a study investigating a possible link between emotion and breathing. The participants were seated next to each other and told that their only task was to sit back, relax, and watch a series of movie clips as their breath sounds were recorded. The cover story specifically did not mention laughter, thereby helping to ensure that any laugh sounds produced would be spontaneous and natural. Each participant was then asked to complete a short demographic and screening questionnaire, as well as to read and sign a consent form authorizing both audio recording and subsequent use of the recorded data. Participants were fitted with head-worn microphones, reminded that their only task was to relax and watch the movie clips, and left in the testing room. Once testing was completed, participants were debriefed as to the true nature of the study and read and signed a complete, more detailed consent form. Only one pair was tested during a given session.
Laugh selection and acoustic analysis
Following Bachorowski et al. (2001), laughter was defined relatively inclusively as being any perceptible vocal event that an ordinary person would characterize as a laugh sound. Two research assistants extracted laughter from the recordings, in each case comparing recording quality on the left and right channels and selecting the better of the two. Analysis was subsequently restricted to sounds that both assistants identified as laughter. Segments containing vocalizations that directly preceded, directly followed, or overlapped a laughter bout were excluded, as speech has been shown to alter the acoustic properties of laughter (Nwokah et al., 1999).
Again following Bachorowski et al. (2001), each laughter file was labeled at the bout and burst levels based on spectrographic representations (illustrated in Fig. 1). A bout was defined as one entire laughter episode, and a burst as a discrete sound (note, syllable, or call) occurring within that episode. Onset and offset times for bouts and bursts were marked with cursor-based labels by one of the two research assistants. Each burst was then labeled as being produced with an “open,” “closed,” or “mixed” mouth position, with either egressive or ingressive air flow. These determinations were made acoustically, for example, based on the presence or absence of the audible characteristics of nostril air flow for closed-mouth unvoiced sounds and the muffled quality associated with closed-mouth voiced sounds (also following Bachorowski et al., 2001). The mixed-mouth designation was used for bursts in which the laugher alternated between open and closed mouth positions. Sounds that could be produced through either open or closed mouth positions were considered ambiguous and were not assigned labels for this analysis.
Figure 1.
Narrow-band spectrograms of laugh bouts produced (a) by a normally hearing female and (b) by a deaf female. Brackets mark examples of individual bursts classified as either voiced, a mix of voiced and unvoiced, or unvoiced. Spectrograms were created using an 11.025 kHz sampling rate and a 0.03 s Gaussian analysis window.
All duration and classification labels were reviewed before conducting further analyses. Further reliability checks were performed approximately 24 months after the initial classification, based on blind relabeling of a randomly selected 10% of the analyzed bouts (as in Bachorowski et al., 2001). Correlating outcomes for duration (reflecting placement of onset and offset labels) showed this measure to be highly reliable (Pearson’s r=0.999). Percent agreement (88%) and reliability (Cohen’s kappa, calculated in SAS) were also high for mouth position (κ=0.69). Agreement in labeling air-flow direction was lower (64.6%), and reliability was correspondingly more modest (κ=0.42).
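As an illustration of the reliability measures reported above, percent agreement and Cohen’s kappa can be computed from two labelers’ classifications as follows. This is a minimal Python sketch for clarity only; the actual analysis used SAS, and the function names here are ours:

```python
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Proportion of items given identical labels by two raters."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement: (p_obs - p_exp) / (1 - p_exp)."""
    n = len(labels_a)
    p_obs = percent_agreement(labels_a, labels_b)
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement expected from each rater's marginal label frequencies
    p_exp = sum((counts_a[k] / n) * (counts_b[k] / n)
                for k in set(labels_a) | set(labels_b))
    return (p_obs - p_exp) / (1 - p_exp)
```

For example, two raters who agree on three of four mouth-position labels, with the marginal frequencies shown, yield an agreement of 0.75 and κ=0.5.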
Audio files were downsampled to 11.025 kHz prior to making acoustic measurements. Custom-written scripts operating on the onset and offset labels were used to automatically or semiautomatically extract bout durations, interburst durations, and “raw” amplitudes, as well as F0- and formant-related measures at the burst level. All acoustic measures are listed and defined in Table 2. Relative burst amplitudes were then calculated as the ratio of each burst’s mean absolute amplitude to the amplitude of the 700 Hz calibration tone recorded on the corresponding channel at the beginning of the session. Percentage-voicing outcomes were based on an automatic F0-extraction routine native to the XWAVES program and were used to classify each bout and burst as unvoiced, mixed, or voiced [see Fig. 1a]. An unvoiced sound was one containing 25% or less voicing, a mixed sound contained between 25% and 75% voicing, and a voiced sound had 75% or more voicing. Percentage-voicing values of the bursts within each bout were used to compute mean percentage voicing at the bout level. Formant frequencies were extracted from all bursts produced by deaf laughers with sufficient voicing to allow this analysis and from a representative sample of bursts produced by hearing laughers. Formant measurement procedures followed those outlined in Bachorowski et al. (2001) and were based on formant-peak locations in linear predictive coding (LPC) spectra (ten coefficients, 40 ms Hamming analysis window, autocorrelation method) overlaid on fast Fourier transform (FFT) spectra (40 ms Hanning analysis window) computed over the same waveform segment (see also Owren and Bernacki, 1998).
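The voicing classification and amplitude normalization just described can be expressed compactly. The Python sketch below is illustrative only (function names are ours; the thresholds are those given in the text) and assumes percentage-voicing values and rms amplitudes have already been extracted:

```python
def classify_voicing(percent_voiced):
    """Label a sound by its percentage of voiced analysis frames:
    <=25% unvoiced, 25-75% mixed, >=75% voiced."""
    if percent_voiced <= 25.0:
        return "unvoiced"
    if percent_voiced >= 75.0:
        return "voiced"
    return "mixed"

def relative_amplitude(burst_rms_db, tone_rms_db):
    """Express a burst's rms amplitude (dB) relative to the 700 Hz
    calibration tone recorded on the same channel."""
    return burst_rms_db / tone_rms_db

def mean_bout_voicing(burst_voicing_values):
    """Mean percentage voicing across the bursts of one bout,
    used to classify the bout as a whole."""
    return sum(burst_voicing_values) / len(burst_voicing_values)
```

Because the same thresholds are applied at both levels, a bout whose bursts average, say, 40% voicing would be classified as mixed even if its individual bursts span all three categories.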
Table 2.
Definitions (and unit labels) of the acoustic measures used.
| Analysis level | Measure | Definition |
|---|---|---|
| Bout (a laughter episode or series) | Duration | Time between bout onset and offset (s) |
| Burst (a continuous, discrete sound within a bout) | Duration | Time between burst onset and offset (s) |
| | Fundamental frequency (F0) | Lowest-frequency harmonic in a quasiperiodic waveform (Hz) |
| | Formant frequency (F1, F2) | Center frequency of the two lowest formants, where a formant is a resonance of the vocal tract (Hz) |
| | Percentage voicing (% voicing) | Percentage of analysis frames in a burst from which an F0 value could be extracted using the XWAVES pitch-extraction algorithm (%) |
| | Raw amplitude | Mean root-mean-square (rms) value computed over the entire burst (dB) |
| | Relative amplitude | A normalized amplitude value derived by taking the ratio of a burst’s raw rms (dB) amplitude to the rms (dB) value of a constant-amplitude, 700 Hz calibration tone recorded on the same channel of the audio recorder using identical input settings |
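The formant-measurement approach described in the preceding section can be illustrated with a simplified sketch. The following Python/NumPy code is our own illustrative implementation, not the XWAVES routine actually used: it derives LPC coefficients by the autocorrelation (Levinson-Durbin) method with ten coefficients, then reads formant candidates from the angles of the LPC polynomial roots:

```python
import numpy as np

def lpc_coefficients(frame, order=10):
    """LPC coefficients via the autocorrelation (Levinson-Durbin) method."""
    windowed = frame * np.hamming(len(frame))
    r = np.correlate(windowed, windowed,
                     mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def formant_candidates(frame, fs=11025, order=10):
    """Formant frequency candidates from the LPC polynomial roots."""
    roots = np.roots(lpc_coefficients(frame, order))
    # Keep one root per complex-conjugate pair, close to the unit circle
    roots = roots[(np.imag(roots) > 0.01) & (np.abs(roots) > 0.8)]
    freqs = np.angle(roots) * fs / (2.0 * np.pi)
    return sorted(f for f in freqs if 90.0 < f < fs / 2 - 90.0)
```

Applied to a voiced laugh segment, the two lowest candidates approximate F1 and F2; in the actual procedure, candidate peaks were additionally checked against FFT spectra computed over the same waveform segment.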
Statistical comparisons relied primarily on repeated-measures ANOVAs using participant identity as the subjects factor, gender as a within-group factor, and hearing status as a between-group factor. Variance distribution analyses were also conducted for all measures, in order to ensure that group differences were not traceable to any one individual. Statistical comparisons at the bout level focused on voicing classification, percentage voicing, duration, and burst-type composition. At the burst level, comparisons included mouth position, air-flow direction, and voicing classifications, as well as duration, interburst interval, and relative amplitude.
RESULTS
Bout-level outcomes
A total of 278 laughter bouts produced by deaf participants and 734 laughter bouts produced by hearing participants were analyzed. Table 3 provides descriptive statistics for bout-level analyses, shown by hearing status and gender. For bouts from deaf laughers, 78.4% were classified as unvoiced, 16.9% as mixed, and 4.7% as voiced. Similarly, 71.3% of bouts from hearing laughers were unvoiced, 25.7% were mixed, and 3.0% were voiced. The number of bouts of each type did not vary by hearing status or gender, nor were there significant effects of these factors on mean percentage voicing per bout. However, not all participants produced laughter bouts in all three voicing categories.
Table 3.
Means and standard deviations (in parentheses) of measures used for bout-level analyses. Data are separated according to hearing status (deaf vs hearing) and gender (male vs female).
| | Deaf males (n=7) | | | Hearing males (n=12) | | |
|---|---|---|---|---|---|---|
| Total bouts (n) | 91 | | | 372 | | |
| Bursts per bout (n) | 5.84 (6.95) | | | 1.57 (1.64) | | |
| Percentage voicing (%) | 15.14 (29.11) | | | 19.05 (25.48) | | |
| Bout type | Unvoiced | Mixed | Voiced | Unvoiced | Mixed | Voiced |
| Percentage by type (%) | 80.22 | 9.89 | 9.89 | 69.89 | 25.81 | 4.30 |
| Duration (s) | 2.32 (0.34) | 2.15 (1.20) | 1.53 (0.62) | 1.46 (1.50) | 2.02 (2.00) | 0.66 (0.36) |
| | Deaf females (n=12) | | | Hearing females (n=11) | | |
| Total bouts (n) | 187 | | | 362 | | |
| Bursts per bout (n) | 1.92 (0.18) | | | 1.42 (1.32) | | |
| Percentage voicing (%) | 14.21 (21.98) | | | 14.64 (20.35) | | |
| Bout type | Unvoiced | Mixed | Voiced | Unvoiced | Mixed | Voiced |
| Percentage by type (%) | 77.54 | 20.32 | 2.14 | 72.65 | 25.69 | 1.66 |
| Duration (s) | 1.94 (2.55) | 2.10 (0.34) | 1.47 (1.23) | 1.27 (1.27) | 1.89 (1.35) | 0.65 (0.82) |
Duration analysis revealed that laughter bouts from deaf participants (M=2.03 s, SD=2.51) were significantly longer than laughter bouts from hearing participants (M=1.50 s, SD=1.50), F(1,1011)=5.06, p=0.03. This difference was also evident in separate analyses of unvoiced bouts, F(1,740)=6.39, p=0.016, but not mixed bouts, F(1,235)=0.03, p=0.86, or voiced bouts, F(1,34)=5.30, p=0.61. No significant differences were found in bout duration by gender.
Burst-level outcomes
Descriptive statistics for 1296 total bursts from the deaf group and 3461 total bursts from the hearing group are shown in Table 4. A clear majority of bursts were unvoiced (86.7% and 68.3% for laughter from deaf and hearing participants, respectively), a smaller proportion were mixed (7.1% and 20.8%), and many fewer were voiced (6.3% and 11.0%). Deaf laughers produced a significantly higher proportion of unvoiced bursts, F(1,4757)=5.34, p=0.03, and fewer mixed bursts, F(1,4757)=11.14, p=0.002, than did hearing laughers. Gender had no effect on the proportion of each type, either for unvoiced, mixed, or voiced bursts. All participants produced unvoiced bursts, 16 of 18 deaf and 22 of 23 hearing participants produced mixed bursts, and 9 deaf and 22 hearing participants produced voiced bursts. Individuals who did not produce mixed bursts also produced no voiced bursts.
Table 4.
Means and standard deviations (in parentheses) of measures used for burst-level analyses. Data are separated according to hearing status (deaf vs hearing) and gender (male vs female).
| | Deaf males (n=7) | | | Hearing males (n=12) | | |
|---|---|---|---|---|---|---|
| Total bursts (n) | 531 | | | 1730 | | |
| Duration (s) | 0.21 (0.23) | | | 0.22 (0.17) | | |
| Burst type | Unvoiced | Mixed | Voiced | Unvoiced | Mixed | Voiced |
| Percentage by type (%) | 85.31 | 5.02 | 9.60 | 66.47 | 21.27 | 12.26 |
| Duration (s) | 0.19 (0.22) | 0.28 (0.19) | 0.31 (0.28) | 0.23 (0.18) | 0.21 (0.14) | 0.16 (0.16) |
| % voicing | 0.49 (2.76) | 46.45 (13.36) | 95.40 (5.80) | 2.36 (6.10) | 47.22 (15.93) | 90.97 (8.17) |
| | Deaf females (n=12) | | | Hearing females (n=11) | | |
| Total bursts (n) | 765 | | | 1731 | | |
| Duration (s) | 0.27 (0.24) | | | 0.18 (0.20) | | |
| Burst type | Unvoiced | Mixed | Voiced | Unvoiced | Mixed | Voiced |
| Percentage by type (%) | 87.58 | 8.50 | 3.92 | 70.02 | 20.34 | 9.64 |
| Duration (s) | 0.27 (0.20) | 0.29 (0.20) | 0.39 (0.69) | 0.20 (0.16) | 0.16 (0.11) | 0.14 (0.43) |
| % voicing | 1.22 (4.28) | 45.88 (15.72) | 91.55 (7.81) | 1.47 (4.92) | 49.39 (15.10) | 90.40 (8.21) |
Overall, duration of bursts did not differ between the deaf (M=0.37 s, SD=0.74) and hearing groups (M=0.25 s, SD=0.49), nor was there a gender effect. However, these outcomes were influenced by the large proportion of unvoiced laughs recorded. When analyzed by category, duration of unvoiced bursts did not differ between groups, but deaf laughers produced both longer voiced, F(1,459)=6.40, p=0.018, and mixed bursts, F(1,811)=10.10, p=0.003. Interburst intervals were also longer in laughter from the deaf group (M=0.24 s, SD=0.51) than in laughter from the hearing group (M=0.16 s, SD=0.25), F(1,3731)=10.85, p=0.002. However, this outcome held only for intervals that followed unvoiced bursts, F(1,2610)=14.05, p=0.001. The rate of laughter production (i.e., number of bursts per second) varied by hearing status, F(1,1062)=14.27, p<0.001, with mean rates of 2.7 and 3.8 bursts∕s produced by deaf and hearing laughers, respectively. However, laughter rate did not differ by gender.
The mean relative amplitude of bursts produced by deaf participants (M=0.60 dB, SD=0.16) was significantly lower than that of hearing participants (M=0.74 dB, SD=0.21), F(1,4755)=20.37, p<0.001. No gender differences were found on this measure. Mean relative amplitudes were significantly lower in laughter produced by deaf participants than in hearing participants in the case of unvoiced bursts, F(1,3485)=22.62, p<0.0001, and voiced bursts, F(1,457)=4.77, p=0.038, but not for mixed bursts.
Laughers from each group were judged to produce bursts with open, closed, and mixed mouth positions, as well as both egressively and ingressively. For deaf laughers, 41.2% of bursts were produced with an open mouth, 56.2% with a closed mouth, and only 0.002% with a mixed mouth position. Mouth position was not acoustically evident in the remaining 2.6% of bursts from these individuals. Hearing laughers produced 33.0% open-mouth bursts, 64.4% closed-mouth bursts, and 2.5% mixed-mouth bursts, with mouth position unidentifiable for the remainder. The majority of bursts were deemed to show egressive air flow, including 67.3% of bursts from deaf laughers and 72.7% from hearing laughers. However, a substantial proportion of bursts was judged to be ingressive, including 22.1% from the deaf group and 17.0% from the hearing group. Most of the bursts judged to be ingressive were also unvoiced (approximately 86.3% for the deaf group and 75.3% for the hearing group). It was not possible to gauge air-flow direction from the acoustic evidence for 10.6% of the bursts from deaf laughers and 10.3% of the bursts from hearing laughers.
F0 and formant outcomes
F0 measurements were extracted from voiced parts of bursts containing over 1% voicing. Because the majority of bursts recorded were unvoiced, inclusion of bursts with less than 25% voicing served to increase the sample size. The resulting sample consisted of 108 bursts by deaf males (including 30 bursts with less than 25% voicing), 767 bursts by hearing males (including 248 bursts with less than 25% voicing), 175 bursts by deaf females (including 80 bursts with less than 25% voicing), and 647 bursts by hearing females (including 128 bursts with less than 25% voicing). Figure 2 shows means and standard deviations for mean (meanF0), minimum (minF0), and maximum (maxF0) F0 outcomes in relation to typical values for male and female speech as reported in the scientific literature.
Figure 2.
Means and standard deviations are shown for the meanF0, maxF0, and minF0 measures for laughter from deaf and normally hearing males and females. These results can be compared to approximate, normative F0 values of 120 and 220 Hz, respectively, for male and female speech (e.g., Baken and Orlikoff, 1999).
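The per-burst F0 summary described above can be sketched as follows, assuming a frame-wise F0 track in which unvoiced frames are marked 0.0 (a common pitch-tracker convention). The function, its threshold argument, and the sample values are illustrative only, not the study's actual procedure.

```python
def f0_summary(track, min_voiced_frac=0.01):
    """track: per-frame F0 estimates in Hz, with 0.0 marking unvoiced frames.
    Returns (meanF0, minF0, maxF0) over the voiced frames, or None when the
    burst does not exceed the minimum voicing fraction (1% here)."""
    voiced = [f for f in track if f > 0.0]
    if len(voiced) / len(track) <= min_voiced_frac:
        return None  # burst excluded from F0 analysis
    return (sum(voiced) / len(voiced), min(voiced), max(voiced))

# A hypothetical burst that is 50% voiced:
print(f0_summary([0.0, 0.0, 210.0, 230.0, 250.0, 0.0]))  # (230.0, 210.0, 250.0)
```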
Not surprisingly, bursts produced by males (M=200.4 Hz, SD=82.9) showed significantly lower overall meanF0 values than did bursts produced by females (M=340.6 Hz, SD=112.6), F(1,1619)=21.13, p<0.0001. However, meanF0 values were also significantly lower in laughter produced by deaf (M=228.2 Hz, SD=82.2) versus hearing (M=275.2 Hz, SD=125.1) participants, F(1,1619)=6.51, p=0.015. While there was no difference in meanF0 between deaf males (M=179.8 Hz, SD=81.7) and hearing males (M=203 Hz, SD=82.8), values from deaf females (M=258.5 Hz, SD=66.6) were significantly below those of hearing females (M=360.4 Hz, SD=112.5), F(1,780)=9.46, p<0.006.
Results of analyses of maxF0 and minF0 values paralleled the findings for meanF0. While F0 values were somewhat lower in deaf males (maxF0: M=213.7 Hz, SD=91.4; minF0: M=150.0 Hz, SD=76.2) than in hearing males (maxF0: M=244.2 Hz, SD=107.3; minF0: M=164.8 Hz, SD=69.2), these differences were not statistically significant. Nonetheless, F0 values were clearly lower in deaf females (maxF0: M=294.6 Hz, SD=69.3; minF0: M=223.5 Hz, SD=76.2) than in hearing females (maxF0: M=395.4 Hz, SD=122.3; minF0: M=319.9 Hz, SD=108.2). Statistical comparisons revealed that these differences were significant both for maxF0, F(1,780)=9.30, p=0.007, and for minF0, F(1,780)=7.87, p<0.011.
Formant-frequency measurements proved more problematic, yielding a much smaller sample. The difficulties were traceable particularly to the relatively low amplitude of laughter produced by deaf participants. Low signal-to-noise ratios ruled out analysis of formants in unvoiced bursts, and also eliminated a number of the voiced bursts in this modestly sized sample. Analyzing all bursts with 1% or more voicing yielded values for 69 bursts from five deaf males, 95 bursts from 15 hearing males, 80 bursts from 13 deaf females, and 36 bursts from six hearing females. Mean frequencies of the first (F1) and second (F2) formants were computed separately for each participant, with resulting grand means shown in Fig. 3. Statistical comparisons revealed no significant differences by hearing status or gender for either F1 or F2. While there was no indication of a difference between deaf and hearing males or between deaf and hearing females, the absence of an overall gender difference likely reflects the small sample sizes and large variances associated with some of these values.
Figure 3.
Mean frequency values of F1 and F2 for all analyzable bursts from deaf males, hearing males, deaf females, and hearing females.
More importantly, mean formant frequencies for these voiced laughter segments fell close to the expected F1 and F2 values for schwa sounds, which are approximately 500 and 1500 Hz, respectively, for males, and approximately 590 and 1775 Hz for females. Individual voiced segments clustered around the center of American-English vowel space, as illustrated in Fig. 4, which plots open-mouth [Fig. 4(a)] and closed-mouth [Fig. 4(b)] bursts separately by F1 and F2 values. Superimposed ellipses redrawn from Hillenbrand et al. (1995) illustrate the range of variation associated with the adult-male vowels /a/, /i/, and /u/, which form the approximate boundaries of the vowel space (female values are somewhat higher). The observations of greatest interest are that the large majority of laugh sounds are centrally located, and that the variation that does occur is largely in F1 frequency. While these F1 values can fall outside the typical schwa range, contrasts in vowel quality are considered to reflect the relationship between F2 and F1 frequencies, with both formants showing a wide range of values (e.g., Ladefoged, 2006).
Figure 4.
Mean frequency values of F1 and F2 for individual bursts from deaf and hearing laughers producing (a) open-mouthed laughs and (b) closed-mouthed laughs with over 1% voicing. Ellipses mark the approximate boundaries of vowel-space variation in normally hearing talkers of American English (after Hillenbrand et al., 1995).
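The clustering illustrated in Fig. 4 can be made concrete with a small sketch measuring how far a voiced burst's (F1, F2) pair lies from the neutral-schwa targets quoted above (about 500/1500 Hz for males and 590/1775 Hz for females). The sample formant values below are invented for illustration.

```python
# Approximate schwa targets (F1, F2) in Hz, from the values cited in the text.
SCHWA = {"male": (500.0, 1500.0), "female": (590.0, 1775.0)}

def distance_to_schwa(f1, f2, gender):
    """Euclidean distance (Hz) from a burst's (F1, F2) to the schwa target."""
    t1, t2 = SCHWA[gender]
    return ((f1 - t1) ** 2 + (f2 - t2) ** 2) ** 0.5

# A hypothetical male burst near the center of the vowel space:
print(distance_to_schwa(530.0, 1540.0, "male"))  # 50.0
```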
DISCUSSION
This study provides an acoustic characterization of laughter produced by congenitally and profoundly deaf college students and normally hearing control participants. Overall, the results were similar to those previously reported for normally hearing adults (Bachorowski et al., 2001; Mowrer et al., 1987; Vettin and Todt, 2004). For example, both groups showed a high degree of acoustic variability, elevated mean F0’s, and formant values that were consistent with neutral, unarticulated vocal-tract positions. Some differences were also found, however, for instance in temporal, F0, amplitude, and percentage-voicing measures. The following sections elaborate on these findings and discuss their implications for laughter in deaf vocalizers. The overall conclusion is that auditory experience, whether with sounds produced by others or by the vocalizers themselves, is not necessary for laughter to emerge in deaf individuals.
Laughter in deaf and hearing participants
Similarities
Acoustic variability. Previous acoustic analyses have indicated that laughter is highly variable at both bout and burst levels, including degree of voicing (Bachorowski et al., 2001), mouth position (Bachorowski et al., 2001), air-flow direction (Bachorowski et al., 2001; Nwokah et al., 1999), and both temporal and F0 characteristics (Bachorowski et al., 2001; Mowrer et al., 1987). The laugh sounds recorded here were similarly variable, both replicating the earlier findings and extending those results to deaf laughers.
For example, both deaf and hearing participants produced bursts that could range from fully unvoiced to fully voiced. Some individuals in both groups produced bursts that were primarily of one type or another, while others produced sounds from all three categories. However, there was no reason to conclude that participants were limited in the degree of voicing they could potentially produce in their bursts. Most bouts and bursts produced by both deaf and hearing participants were unvoiced, a smaller number were mixed, and relatively few were purely voiced. This is the same order observed by Bachorowski et al. in their larger sample of normally hearing laughers, although higher proportions of voiced bouts and bursts were found in that study.
Similarly, bursts in both the deaf and hearing groups could be produced with open, closed, or mixed mouth positions and with either egressive or ingressive airflow. Mouth-position findings were similar to those reported by Bachorowski et al. (2001), although reliability was not quite as high in the current work as in that report. However, both studies found that most laughs were produced with the mouth closed, a significant number with the mouth open, and only a few with a mixed mouth position. Both deaf and hearing laughers scored here not only produced a majority of egressive sounds, but also showed a nontrivial rate of ingressive sounds. This finding contrasts with some earlier studies arguing that ingressive laugh sounds are extremely rare (e.g., Provine and Yong, 1991) and may also be the first case in which air-flow direction has actually been coded and quantified. This outcome should be interpreted cautiously, however, as reliability was lower than desired for this measure, and the validity of the judgments made could not be assessed. The outcomes are included mainly as an additional point of comparison for the hearing and deaf participants, as well as to highlight the need to specifically investigate this dimension of laughter production in future work.
Laughter in both groups was also characterized by significant temporal variability at each level of analysis. At the bout level, laughter in deaf participants lasted a minimum of 0.08 s and a maximum of 24.6 s. Laughter bouts in hearing participants were as short as 0.04 s and as long as 12.4 s. At the burst level, laughs ranged from 0.004 to 3.06 s for deaf laughers and from 0.002 to 4.02 s for hearing laughers. Considerable variation was also present in temporal factors such as bout and burst durations, with interburst intervals being greater in both laugh samples than in previous reports (cf. Bachorowski et al., 2001; Mowrer et al., 1987). These discrepancies may at least partly reflect inter- and intraindividual differences in intensity of response to the particular stimuli used in the various studies (Mowrer, 1994).
In the current work, the mean laugh-production rate for hearing participants (3.82 bursts∕s) was somewhat lower than that reported by Bachorowski et al. (2001) (4.37 bursts∕s), but nonetheless higher than the reported rate of speech (3.26 syllables∕s; Venkatagiri, 1999). The mean rate was lower among deaf laughers (2.82 bursts∕s), perhaps reflecting that vocal production rates are generally slower among deaf than among hearing vocalizers (Leder and Spitzer, 1993; Okalidou and Harris, 1999; Osberger and Levitt, 1979; Osberger and McGarr, 1983). Possible reasons for these findings are discussed below.
F0 measures. Several previous studies (e.g., Bachorowski et al., 2001; Provine and Yong, 1991; Vettin and Todt, 2004) have found the mean F0 of laughter in hearing individuals to be much higher than the reported mean F0 of normative speech. In fact, a large F0 range has been described as a defining characteristic of laughter (Mowrer et al., 1987). Laughter from deaf vocalizers was also characterized by large F0 ranges. At the burst level, mean F0 values for laughter produced by deaf females spanned 456.7 Hz; mean F0 values for bursts produced by deaf males similarly spanned 460.9 Hz. However, ranges of mean F0 values were even greater among hearing laughers, being 742.4 Hz for female bursts and 550.0 Hz for male bursts.
In line with these characteristically large F0 ranges, mean F0’s in laughter have been reported to be more than double comparable values in speech (Mowrer et al., 1987). In the current study, mean F0’s were elevated in both deaf and hearing groups (see Fig. 2), although they were not twice as high as prototypical values from normative speech (e.g., 120 and 220 Hz for males and females, respectively; Baken and Orlikoff, 1999). Mean F0 values in the speech of deaf individuals are thought to be similar to those of hearing talkers, but also to vary significantly by individual talker (e.g., Lane et al., 1997). Finding more or less comparable F0 values in the laughter of deaf and hearing males, but lower F0’s in the laughter of deaf versus hearing females, is therefore somewhat difficult to interpret. Taken at face value, the outcomes could indicate that laughter in deaf females does not show the same degree of F0 increase as found in hearing females or in males overall. On the other hand, the observed difference could reflect chance effects of a high degree of individual variation, that deaf females were less engaged by the stimulus material used for laugh induction, or that they were showing a greater degree of damping or inhibition of their vocal responses to that material (see below).
Lack of articulation. Although the evidence is indirect, acoustic results for laughter from both deaf and hearing groups indicate an overall lack of supralaryngeal articulatory effects. For example, plotting the formants extracted from voiced laughs in F1-F2 vowel space (see Fig. 4) reveals close clustering of F2 values for both open- and closed-mouth versions. However, while F1 frequencies from closed-mouth laughter are similarly clustered, there is a greater variation in open-mouth F1 values. In vowel production, which has been extensively studied by using both direct and indirect methods, F2 frequency primarily reflects the front-to-back location of a vocal-tract constriction created by the tongue, whereas F1 frequency is largely determined by mouth-opening size and overall tongue height (e.g., Rosner and Pickering, 1994). Extending those principles to laughter, the overall location and the relative homogeneity in F2 values within the vowel space are consistent with an unconstricted vocal tract and concomitant schwalike auditory quality for both open- and closed-mouth versions.
These vowel-production principles can also make sense of the finding that mouth position is critical for F1. On the one hand, keeping the mouth closed implies a raised jaw and little or no variation in tongue height or mouth opening [Fig. 4(b)]. F1 values should thus be relatively invariant for each individual laugher, and to a lesser degree, across individuals. On the other hand, lowering the jaw and parting the lips creates the possibility of significant variation in both tongue height and mouth-opening size. Given that each of these parameters can be expected to vary both within and among individual laughers, the heterogeneity observed for F1 values in open-mouth laughter [Fig. 4(a)] is also understandable. Overall, these outcomes are consistent with Bachorowski et al.’s (2001) formant-related findings for laughter from normally hearing vocalizers (see Fig. 6 in that article), and inconsistent with attributing a range of vowel qualities to these sounds (e.g., Provine, 2000; Provine and Yong, 1991).
Differences
Despite the overall similarity in acoustic properties of laughter shown by deaf and hearing participants, some differences were also found. These particularly included aspects of duration, amplitude, percentage voicing, and F0. In several cases, those differences may be traceable to deficiencies in laryngeal and oral muscle control resulting from the relatively low rates of vocalization occurring in profoundly deaf individuals. Although our deaf participants’ use of speech was not specifically investigated, all lived in a deaf community in which ASL is the primary form of language communication, and all reported ASL rather than spoken English to be their primary, native language.
Researchers investigating vocal production in deaf talkers have suggested that laryngeal and oral deficiencies can affect the temporal characteristics of their speech (LaPointe et al., 1990; Okalidou and Harris, 1999). Since speech and laughter utilize the same vocal apparatus (Nwokah et al., 1999), laughter in the deaf participants tested here may have been subject to similar constraints. One possible example emerging in the results was the longer interburst intervals found in deaf laughers, which could conceivably be analogous to the interphonemic and intersyllabic temporal distortions found in deaf talkers (Rothman, 1976). However, this effect occurred only following unvoiced bursts, suggesting that the two phenomena are not directly related.
A more likely parallel was revealed in finding that deaf laughers produced longer-duration voiced bursts than did hearing laughers. Figure 1b shows two potentially relevant examples, the first being a sustained, creaky vowel sound occurring before the laugh begins (which was not scored as a laugh). This event is a likely analog to an unvoiced exhalation or inhalation that hearing vocalizers can routinely produce just before laughter onset, but with this deaf female’s vocal folds becoming engaged and vibrating somewhat irregularly. Later in the same bout, voiced bursts that would likely be separated in a hearing person’s laughter are connected by ongoing, and arguably artifactual, phonation. Here again, it appears that voicing is occurring at points where the vocal folds would have become disengaged in a hearing person. These phenomena may thus be mechanistically related to the slower, elongated vowels reported for deaf speech (Bakkum et al., 1995; Okalidou and Harris, 1999).
While a number of studies have reported that the mean speaking F0 of deaf individuals does not differ from that of hearing talkers (e.g., Waldstein, 1990), others have found F0 values to be higher in the former (e.g., Leder and Spitzer, 1993). These conflicting results may reflect the higher individual variability noted earlier for deaf talkers, who, for instance, show greater variability in their experience with speech (Lane et al., 1997). Current results showed the converse, namely, that the range of mean F0 values was smaller among deaf than among hearing laughers, differences of almost 200 and 100 Hz for females and males, respectively. The next section considers a likely cause, namely, that more purely social factors may have influenced the vocal behavior of deaf participants during testing.
Another consideration is that laughter occurs in the context of language communication in both deaf signers and hearing talkers (e.g., Provine and Emmorey, 2006; Provine and Yong, 1991). While any signing that may have been produced by deaf participants in the current work would not have affected recovery of laugh sounds, that was not the case for the talking that sometimes occurred in hearing participants. For the latter, laugh sounds occurring in the context of speech were specifically excluded from analysis, which may have served to inflate the differences observed between deaf and hearing laughers on measures such as proportion of unvoiced versus voiced laughter, and F0 values. However, the proportions of voiced laughter documented for normally hearing listeners at both bout and burst levels remained below those reported by Bachorowski et al. (2001), which argues against the possibility that any such effects had a major influence on current outcomes. Nonetheless, the safest conclusion may be that the results reflect a conservative estimate of the degree of similarity between laughs from deaf and hearing participants.
Social factors
While temporal aspects of the laughter recorded from deaf vocalizers may have been affected by more organic factors, group differences in relative amplitude, percentage voicing, and F0 may also reflect that deaf participants were reacting less strongly than hearing participants to the humorous stimuli being presented (Mowrer et al., 1987). In other words, the material may not have been sufficiently funny to the deaf participants to elicit as much spontaneous laughter as shown by hearing individuals.
Another possibility is that the deaf participants may have been actively inhibiting their vocal responses. While amplitudes of infant vocalizations are reported not to differ for deaf and hearing babies (Oller et al., 1985), deaf adults report being actively concerned about inadvertently vocalizing too loudly and that they feel social pressure to avoid doing so (Leder and Spitzer, 1993). In addition, being unable to monitor their own production, many deaf individuals are self-conscious about the quality of their utterances and vocalizations, fearing that they may sound “funny” (Higgins, 1980, p. 94). In the current situation, this kind of social conditioning may have produced vocal suppression among the deaf participants. If so, likely effects would include not only lower-amplitude laughter, but also less voicing, and lower F0 values in bouts and bursts in which phonation did occur. Although only one of the deaf participants explicitly reported self-consciousness and concern in the testing situation, the participants may have been under-reporting such feelings or exhibiting unconsciously controlled vocal suppression.
Emotional “contagion” could also have played a role in either one or both of the testing situations. Previous research has shown that laughter occurs much more frequently in social than in solitary settings (Provine and Fischer, 1989), and that friends, in particular, are likely to trigger laughter in one another by laughing (Smoski and Bachorowski, 2003a; 2003b). Participants were tested in pairs specifically to encourage the occurrence of laughter, with same-sex friends targeted as the dyads most likely to produce laughter under these sorts of laboratory circumstances (Owren and Bachorowski, 2003). Taking this approach thus introduced the possibility that some of the laughter might be catalyzed by the social nature of the situation rather than reflecting positive responses to the humor per se. In the deaf participants, the trigger could be seeing the testing partner laughing, while in the hearing participants, both the sight and the sound of laughter could be involved. It is at this point impossible to know whether deaf or hearing participants would be most affected, although it seems likely that contagion resulting from visual stimulation alone would be less than that from the combination of visual and auditory effects. If so, this difference could also have contributed to the deaf individuals experiencing less intense responses to the material presented than did hearing participants.
CONCLUSIONS
Overall, results of this study confirm that the occurrence of human laughter in fundamentally species-typical form does not depend on vocalizers having significant exposure either to laugh sounds or to other auditory input. Given the level of hearing impairment shown by our deaf participants, the sounds of their own laughter could also not be expected to provide sufficient auditory feedback to influence or tune the respiratory and vocal musculature involved in the production process. Acoustic analysis revealed that both deaf and hearing participants produced laughter showing a number of the critical features described by Mowrer et al. (1987) and Bachorowski et al. (2001). For example, neither group produced laughter that could be considered stereotyped (cf. Provine and Yong, 1991), instead exhibiting significant acoustic variability in these sounds. Furthermore, both kinds of participants produced unvoiced, mixed, and voiced laughter, with each group making these sounds with the mouth either open or closed. Both deaf and hearing vocalizers also produced more unvoiced than voiced laughter, and when voiced laughter did occur, the F0 values involved were significantly higher than expected in normative speech. Finally, neither group showed evidence of significant supralaryngeal articulation effects.
The study thus produced firm support for the characterization of laughter as a biologically grounded behavior, although providing only indirect evidence concerning the nature of the mechanism involved. Provine and Yong (1991) and Provine (2000) have likened laughter to the fixed- or stereotyped-action patterns proposed by ethologists to underlie behavior in many nonhuman species (see Eibl-Eibesfeldt, 1989). The current results are broadly compatible with that characterization, but also contradict it in confirming previous evidence of a high degree of acoustic variability in the behavior. This degree of variability would not be expected from an action-pattern-like control system, whose hallmark should instead be a marked degree of stereotypy. The variability observed here is furthermore not attributable to the overall absence of auditory experience in the deaf participants, given similar findings for hearing participants and in other studies (Bachorowski et al., 2001; Mowrer et al., 1987). Conversely, in light of the current results, the variability is also not attributable to auditory learning effects among the hearing.
The acoustic differences that were found between laughter in deaf and hearing participants appear more likely to be traceable to the effects of deafness on vocal-fold response properties and other aspects of vocal production (LaPointe et al., 1990; Okalidou and Harris, 1999) than to lack of auditory experience with laugh sounds. Social effects, such as suppression of spontaneous vocalization, are also likely to have influenced the sounds produced by the deaf participants. It will therefore be critical to address the issue of comfort level for deaf vocalizers during future recording work, which would be expected to yield laughs with significantly higher amplitudes and percentage-voicing scores. This problem dovetails with an issue that has not yet been adequately addressed for any nonverbal vocalization, namely, how overall arousal level affects acoustic realization of various sound types. An underlying assumption in the current work has been that greater response intensities produce higher-amplitude laughter, as well as a higher probability of voiced, rather than unvoiced, bursts and bouts. While intuitively plausible based on everyday experience, specific empirical testing is strongly needed.
Similarly, while the focus of this study was the acoustic properties of laughter, there is evidence that specific social contexts can affect these sounds. Vettin and Todt (2004) have, for instance, proposed that laughter produced during social conversation is acoustically different from laugh vocalizations that are more specifically associated with humor and elicited under laboratory conditions. These differences are presumed not to be traceable to relative differences in vocalizer arousal level, an issue that remains to be examined in this circumstance as well. A further complication is that, paralleling effects with smiling, normally hearing humans are believed to be able to produce both spontaneous and more volitional versions of laughter (e.g., Keltner and Bonanno, 1997). In other words, in the course of acquiring volitional control over vocal production during infancy and early childhood (see Owren and Goldstein, 2008), developing humans are also likely to gain the ability to routinely produce simulated laugh sounds at appropriate moments in the furtherance of social motivations and goals. While likely a routine component of effective social interaction, volitional laughter is probably very difficult to separate empirically from truly spontaneous versions. Working with hearing-impaired laughers arguably provides a means of doing just that, specifically in cases where lack of auditory feedback and of significant practice in volitional control of vocal production may rule out the occurrence of volitional laugh sounds.
Understanding the biological origins and normative features of human laughter thus requires disentangling a number of issues, including the nature of the human biological endowment underlying normative laughter, the role of physiological factors such as vocal-fold response and vocal-tract motor control, the impact of social proscriptions concerning vocal production, the influence of vocalizer arousal and emotional state, and the degree of spontaneity versus volitional control in a given instance of laughter production. The current study, which significantly strengthens the case for laughter as a biologically grounded, universal human behavior, may also prove helpful in eventually addressing these issues. While the work is only a first step in many ways, it nonetheless illustrates that laughter can be recorded from deaf vocalizers under controlled circumstances, and then fruitfully compared to sounds produced by normally hearing control participants. Refining and expanding the techniques involved may make this overall approach a potent tool in the larger endeavor of achieving a scientific understanding of human laughter.
ACKNOWLEDGMENTS
This work was supported in part by NIMH Prime Award No. 1 R01 MH65317-01A2, Subaward No. 8402-15235-X, by the Center for Behavioral Neuroscience, STC Program of the National Science Foundation under Agreement No. IBN-9876754, and by a grant from the Field of Psychology Graduate Student Research Award Fund at Cornell University to M.M.M. We thank Raylene Lotz, our project assistant at Gallaudet University, as well as Amy Chu, Danielle Inwald, Douglas Markant, and Maria Boresjsza-Wysocka, our project assistants at Cornell University. John Anderson, Erik Patel, three anonymous reviewers, and Christine Shadle provided valuable comments on earlier versions of this article.
Portions of this work were presented at the Human Behavior and Evolution Society conference, Austin, Texas, June 2005, and the American Society of Primatologists conference, Portland, Oregon, August 2005.
References
- Apte, M. L. (1985). Humor and Laughter: An Anthropological Approach (Cornell University, Ithaca: ). [Google Scholar]
- Bachorowski, J.-A., Smoski, M. J., and Owren, M. J. (2001). “The acoustic features of human laughter,” J. Acoust. Soc. Am. 10.1121/1.1391244 110, 1581–1597. [DOI] [PubMed] [Google Scholar]
- Baken, R. J., and Orlikoff, R. F. (1999). Clinical Measurement of Speech and Voice (Singular, New York: ). [Google Scholar]
- Bakkum, M. J., Plomp, R., and Pols, L. C. W. (1995). “Objective analysis versus subjective assessment of vowels pronounced by deaf and normal-hearing children,” J. Acoust. Soc. Am. 10.1121/1.413568 98, 755–762. [DOI] [PubMed] [Google Scholar]
- Black, D. W. (1984). “Laughter,” J. Am. Med. Assoc. 10.1001/jama.252.21.2995 252, 2995–2998. [DOI] [PubMed] [Google Scholar]
- Boersma, P. P. G. (2001). “Praat, a system for doing phonetics by computer,” Glot Int. 5, 341–345. [Google Scholar]
- Darwin, C. (1872). The Expression of Emotion in Man and Animals (Murray, London: ). [Google Scholar]
- Eibl-Eibesfeldt, I. (1989). Human Ethology (Aldine Transaction, New York: ). [Google Scholar]
- Edmonson, M. S. (1987). “Notes on laughter,” Anthro. Ling. 29, 23–33. [Google Scholar]
- Grammer, K., and Eibl-Eibesfeldt, I. (1990). “The ritualization of laughter,” Naturlichkeit der Sprache und der Kultur: Acta Colloquii, edited by Koch W. (Bochum, Brockmeyer: ), pp. 192–214. [Google Scholar]
- Hall, G. S., and Allin, A. (1897). “The psychology of tickling, laughing, and the comic,” Am. J. Psychol. 10.2307/1411471 9, 1–41. [DOI] [Google Scholar]
- Higgins, P. C. (1980). Outsiders in a Hearing World: A Sociology of Deafness (Sage, Beverly Hills: ). [Google Scholar]
- Hillenbrand, J., Getty, L. A., Clark, M. J., and Wheeler, K. (1995). “Acoustic characteristics of American English vowels,” J. Acoust. Soc. Am. 10.1121/1.411872 97, 3099–3111. [DOI] [PubMed] [Google Scholar]
- Hirson, A. (1995). “Human laughter—a forensic phonetic perspective,” Studies in Forensic Phonetics, edited by Braun A. and Kosten J.-P. (Wissenshaftlicher Verlag, Trier: ), pp. 77–86. [Google Scholar]
- Keltner, D., and Bonanno, G. A. (1997). “A study of laughter and dissociation: Distinct correlates of laughter and smiling during bereavement,” J. Pers Soc. Psychol. 73, 687–702. [DOI] [PubMed] [Google Scholar]
- Kipper, S., and Todt, D. (2003). “The role of rhythm and pitch in the evaluation of human laughter,” J. Nonverb. Beh. 27, 255–272. [Google Scholar]
- Ladd, P. (2003). Understanding Deaf Culture. In Search of Deafhood (Multilingual Matters, Toronto: ). [Google Scholar]
- Ladefoged, P. (2006). A Course in Phonetics (Thomson Wadsworth, New York: ), 5th ed. [Google Scholar]
- Lane, H., Wozniak, J., Matthies, M., Svirsky, M., Perkell, J., O’Connell, M., and Manzella, J. (1997). “Changes in sound pressure and fundamental frequency contours following change in hearing status,” J. Acoust. Soc. Am. 10.1121/1.418245 101, 2244–2252. [DOI] [PubMed] [Google Scholar]
- LaPointe, L. L., Mowrer, D. M., and Case, J. L. (1990). “A comparative acoustic analysis of the laugh responses of 20- and 70-year-old males,” Int. J. Aging Hum. Dev. 31, 1–9.
- Leder, S. B., and Spitzer, J. B. (1993). “Speaking fundamental frequency, intensity, and rate of adventitiously profoundly hearing-impaired adult women,” J. Acoust. Soc. Am. 93, 2146–2151, doi:10.1121/1.406677.
- Mowrer, D. E. (1994). “A case study of perceptual and acoustic features of an infant’s first laugh utterances,” Humour 7, 139–155.
- Mowrer, D. E., LaPointe, L. L., and Case, J. (1987). “Analysis of five acoustic correlates of laughter,” J. Nonverb. Behav. 11, 191–200.
- Nwokah, E. E., Hsu, H.-C., Davies, P., and Fogel, A. (1999). “The integration of laughter and speech in vocal communication: A dynamic systems perspective,” J. Speech Lang. Hear. Res. 42, 880–894.
- Nwokah, E. E., Davies, P., Islam, A., Hsu, H.-C., and Fogel, A. (1993). “Vocal affect in three-year-olds: A quantitative acoustic analysis of child laughter,” J. Acoust. Soc. Am. 94, 3076–3090, doi:10.1121/1.407242.
- Okalidou, A., and Harris, K. S. (1999). “A comparison of intergestural patterns in deaf and hearing adult speakers: Implications from an acoustic analysis of disyllables,” J. Acoust. Soc. Am. 106, 394–410, doi:10.1121/1.427064.
- Oller, D. K., Eilers, R. E., Bull, D. H., and Carney, A. E. (1985). “Prespeech vocalizations of a deaf infant: A comparison with normal metaphonological development,” J. Speech Lang. Hear. Res. 28, 47–63.
- Osberger, M. J., and Levitt, H. (1979). “The effect of timing errors on the intelligibility of deaf children’s speech,” J. Acoust. Soc. Am. 66, 1316–1324, doi:10.1121/1.383552.
- Osberger, M. J., and McGarr, N. S. (1983). “Speech production characteristics of the hearing impaired,” Speech and Language: Advances in Basic Research and Practice, edited by Lass N. (Academic, New York), pp. 221–283.
- Owren, M. J., and Bachorowski, J.-A. (2003). “Reconsidering the evolution of nonlinguistic communication: The case of laughter,” J. Nonverb. Behav. 27, 183–200.
- Owren, M. J., and Bernacki, R. H. (1998). “Applying linear predictive coding (LPC) to frequency-spectrum analysis of animal acoustic signals,” Animal Acoustic Communication: Sound Analysis and Research Methods, edited by Hopp S. L., Owren M. J., and Evans C. S. (Springer-Verlag, Berlin), pp. 129–161.
- Owren, M. J., and Goldstein, M. H. (2008). “The babbling-scaffold hypothesis: Subcortical primate-like circuitry helps teach the human cortex how to talk,” Evolution of Communicative Flexibility: Complexity, Creativity, and Adaptability in Human and Animal Communication, edited by Oller D. K. and Griebel U. (MIT, Cambridge), pp. 169–192.
- Padden, C. A., and Humphries, T. L. (1990). Deaf in America: Voices from a Culture (Harvard University, Cambridge).
- Provine, R. R. (1993). “Laughter punctuates speech: Linguistic, social, and gender contexts of laughter,” Ethology 95, 291–298.
- Provine, R. R. (2000). Laughter: A Scientific Investigation (Viking, New York).
- Provine, R. R., and Emmorey, K. (2006). “Laughter among deaf signers,” J. Deaf Stud. Deaf Educ. 11, 403–409.
- Provine, R. R., and Fischer, K. R. (1989). “Laughing, smiling, and talking: Relation to sleeping and social context in humans,” Ethology 83, 295–305.
- Provine, R. R., and Yong, Y. L. (1991). “Laughter: A stereotyped human vocalization,” Ethology 89, 115–124.
- Rankin, A. M., and Philip, P. J. (1963). “An epidemic of laughing in the Bukoba district of Tanganyika,” Cent. Afr. J. Med. 12, 167–170.
- Rosner, B. S., and Pickering, J. B. (1994). Vowel Perception and Production (Oxford University, Cambridge).
- Rothman, H. B. (1976). “A spectrographic investigation of consonant-vowel transitions in the speech of deaf adults,” J. Phonetics 4, 129–136.
- Savithri, S. R. (2000). “Acoustics of laughter,” J. Acoust. Soc. India 28, 233–238.
- Scheiner, E., Hammerschmidt, K., Jürgens, U., and Zwirner, P. (2002). “Acoustic analyses of developmental changes and emotional expression in the preverbal vocalizations of infants,” J. Voice 16, 509–529, doi:10.1016/S0892-1997(02)00127-3.
- Scheiner, E., Hammerschmidt, K., Jürgens, U., and Zwirner, P. (2004). “The influence of hearing impairment on preverbal emotional vocalizations of infants,” Folia Phoniatr. Logop. 56, 27–40.
- Smoski, M. J., and Bachorowski, J.-A. (2003a). “Antiphonal laughter in developing friendships,” Ann. N.Y. Acad. Sci. 1000, 300–303, doi:10.1196/annals.1280.030.
- Smoski, M. J., and Bachorowski, J.-A. (2003b). “Antiphonal laughter between friends and strangers,” Cogn. Emot. 17, 327–340.
- Sroufe, L. A., and Wunsch, J. P. (1972). “The development of laughter in the first year of life,” Child Dev. 43, 1326–1344, doi:10.2307/1127519.
- Svebak, S. (1975). “Respiratory patterns as predictors of laughter,” Psychophysiology 12, 62–65.
- Venkatagiri, H. S. (1999). “Clinical measurements of rate of reading and discourse in young adults,” J. Fluency Disorders 24, 209–226.
- Vettin, J., and Todt, D. (2004). “Laughter in conversation: Features of occurrence and acoustic structure,” J. Nonverb. Behav. 28, 93–115.
- Waldstein, R. S. (1990). “Effects of postlingual deafness on speech production: Implications for the role of auditory feedback,” J. Acoust. Soc. Am. 88, 2099–2114, doi:10.1121/1.400107.




