Abstract
Study Objective:
Investigate the impact of sleep deprivation on vocal expression of emotion.
Design:
Within-group repeated measures analysis involving sleep deprivation and rested conditions.
Setting:
Experimental laboratory setting.
Patients or Participants:
Fifty-five healthy participants (24 females), including 38 adolescents aged 11-15 y and 17 adults aged 30-60 y.
Interventions:
A multimethod approach was used to examine vocal expression of emotion in interviews conducted at 22:30 and 06:30. On that night, participants slept a maximum of 2 h.
Measurements and Results:
Interviews were analyzed for vocal expression of emotion via computerized text analysis, human rater judgments, and computerized acoustic properties. Computerized text analysis and human rater judgments indicated decreases in positive emotion in all participants at 06:30 relative to 22:30, and adolescents displayed a significantly greater decrease in positive emotion via computerized text analysis relative to adults. Increases in negative emotion were observed among all participants using human rater judgments. Results for the computerized acoustic properties indicated decreases in pitch, bark energy (intensity) in certain high frequency bands, and vocal sharpness (reduction in high frequency bands > 1000 Hz).
Conclusions:
These findings support the importance of sleep for healthy emotional functioning in adults, and further suggest that adolescents are differentially vulnerable to the emotional consequences of sleep deprivation.
Citation:
McGlinchey EL; Talbot LS; Chang KH; Kaplan KA; Dahl RE; Harvey AG. The effect of sleep deprivation on vocal expression of emotion in adolescents and adults. SLEEP 2011;34(9):1233-1241.
Keywords: Sleep deprivation, adolescents, positive emotion, negative emotion, vocal expression
INTRODUCTION
Mounting evidence from the literature suggests a critical role for sleep in emotional functioning.1 For example, healthy adult participants whose sleep was restricted to 5 h per night over one week reported a progressive increase in negative emotion with each night of sleep restriction.2 Furthermore, sleep deprivation appears to differentially impact positive versus negative emotions. Franzen, Siegle, and Buysse reported that sleep loss was associated with increases in negative emotion and with decreases in positive emotion when compared to individuals with no sleep loss.3 In another study, Zohar and colleagues reported a similar pattern of decreased positive emotion in the context of a goal-enhancing event and increased negative emotion in the context of a goal-disrupting event when medical residents were sleep deprived.4 The current study aims to further investigate the relationship between sleep loss and positive and negative emotions with a particular focus on comparing adolescents to adults.
Given the research suggesting a critical role for sleep in emotional functioning,2,3 it is of great concern that nearly half of America’s teenagers report regular insufficient sleep and excessive daytime sleepiness, particularly during school days.5 Late-night bedtimes combined with early school start times contribute to what is increasingly regarded as an epidemic of sleep deprivation in adolescents. Previous correlational studies have documented an association between self-reported lack of sleep and poor emotional functioning.–7 For example, in a recent large survey of adolescent sleep habits (n = 1602 7th–12th graders), Carskadon et al. reported that 45% of adolescents experienced insufficient sleep on school nights and 28% complained they often feel “irritable and cranky” as a consequence of not getting enough sleep.5 The current study sought to extend this literature by experimentally examining the relationship between sleep deprivation and emotion in adolescents. Furthermore, emotion was measured through objective methods rather than relying on self-report.
Adolescents are struggling with the burdens of sleep deprivation in addition to the inevitable social and biological changes that occur at the onset of puberty. Many of the hormonal, neural, and cognitive systems thought to underlie the regulation of emotion mature throughout the adolescent period,8 and the prevalence of various forms of psychopathology, including emotional and behavioral disorders, increases dramatically during adolescence.9 Hence, identifying the mechanisms by which adolescents develop and maintain emotional problems defines an important public health priority. A goal of the present study was to delineate one possible modifiable mechanism by which a critical, but understudied, feature of adolescent emotion difficulties might be maintained; namely, sleep deprivation.
There are many methods currently used in the study of emotion. However, it has long been argued that vocal expression is one of the most direct windows into an individual’s feelings and emotions.10 Indeed, many have argued that study of the voice is a more efficient and effective method for studying emotion than facial coding or physiological methods.11–15 Juslin and Scherer, in their work on vocal expression of emotion, have reminded us of the well-known phrase, “It’s not what she said, it’s how she said it” (p. 66).11 Several methods involving systematic coding of vocal expression have been developed, ranging from contextual human judgments12,13 to computer-based methods that lack the analysis of context.14,15 It is important to consider each method when studying vocal expression of emotion. Computer-based methods are objective and efficient, but contextual nuance is disregarded. Human judgments can take context into account, but accurate identification of discrete emotions can vary considerably.11 In the present study, we utilized a multimethod approach that includes both computer-based approaches and contextual human judgments. Computerized text analysis using the Linguistic Inquiry and Word Count (LIWC) program14 has been extensively studied in the context of words used in a therapeutic setting, and this method of analysis has been effective at identifying emotionally salient speech.15 Human rater judgments are commonly used in studies of vocal expression of emotion, and reasonable interrater reliability has been established for many basic emotions.16,17 Finally, previous research on the acoustic properties of vocal expression of emotion has yielded numerous associations between variations in vocal properties and specific emotions.11,18,19 In the current study, all 3 methods were used to study vocal expression of emotion.
As part of a larger 2-day study on affect and sleep deprivation in adolescents compared to adults, the current study focused on vocal expression of emotion on one of the nights of sleep deprivation. A multimethod approach was used, specifically utilizing: (a) computerized text analysis of emotion content in vocal expression, (b) human rater judgments of emotion in vocal expression, and (c) computerized acoustic properties of vocal expression. Based on past research,3,4 we hypothesized that vocal expression of positive emotions would decrease and vocal expression of negative emotions would increase after sleep deprivation, relative to when rested. The second aim was to determine if sleep deprivation affects vocal expression of emotion differently for adolescents relative to adults. We predicted that the hypothesized emotional effects of sleep deprivation (in hypothesis 1) would be greater for adolescents relative to adults.
METHODS
Participants
Fifty-five healthy participants completed the study. Thirty-eight adolescents (15 female) aged 11-15 years and 17 adults (9 female) aged 30-60 years were recruited through flyers posted in the community and through online advertisements. Individuals aged 16-30 years were excluded for 2 primary reasons: (1) to provide a clear neurodevelopmental difference between the adult and adolescent groups, as the prefrontal cortex is still developing at least until age 25,20 and (2) to ensure clear differentiation of sleep patterns between the adult and adolescent groups by avoiding the delayed sleep phase that can often occur during the late teens and early 20s.21 Adults older than 60 years were excluded because of age-related changes in the sleep-wake cycle.22 The inclusion criteria were (a) no medical conditions; (b) no history of head trauma; (c) not meeting criteria for any major past or current sleep disorder according to the Duke Structured Interview for Sleep Disorders (DSISD)23; and (d) not meeting Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV-TR)24 criteria for any past or current Axis I disorder according to the Structured Clinical Interview for DSM-IV (SCID)25 in adult cases and according to the Kiddie Schedule for Affective Disorders and Schizophrenia Present and Lifetime version (K-SADS-PL)26 in adolescent cases. Adolescent participants completed the self-rating scale for pubertal development in order to measure pubertal status.27
Measures
Stanford Sleepiness Scale (SSS)
The SSS is a 1-item measure of subjective sleepiness.28 Response options range from 1 “feeling active, vital, alert, and wide awake” to 7 “no longer fighting sleep, sleep onset soon; having dream-like thoughts.”
Speak freely interview
This method was adapted from Halford et al.29 and Harvey et al.30 Participants were asked the following 4 questions by a trained research staff member, who requested they spend one minute answering each question. The instructions given were as follows:
“I am going to be asking you some questions about how you are feeling right now. I want you to talk for one minute on each question and I’m not going to interrupt you. Any questions? (Pause to answer any questions.) OK. Now let’s begin.”
The questions for each time period were as follows:
22:30:
How are you feeling right now?
What are you looking forward to tonight?
How do you expect you’ll feel without sleep?
Is there anything you’re not looking forward to?
06:30:
How are you feeling right now?
What are you looking forward to today?
How do you expect you’ll feel the rest of this morning without sleep?
Is there anything you’re not looking forward to?
If the participant finished speaking prior to the end of one minute and did not recommence within 5 sec, the interviewer was trained to administer the following prompt up to 3 times: “Is there anything else you can think of?” If the participant ceased speech and did not recommence within 5 sec, the interviewer went on to the next question or the interview stopped. The entire interview was recorded using a video recorder with a built-in microphone (iSight camera by Apple Inc.). Interviews were recorded at a 44.1 kHz sampling frequency. Recording conditions were kept consistent across all participants during all interviews. Each participant was seated directly in front of the camera, 2 feet away. The camera was kept in a marked location on a table in the center of the room for all recordings. Participants were instructed to sit in the same position in a fixed chair for each recording. The recording room was 25 × 33 feet, and there were no objects between the participants and the camera. There were some tables in the room, but these were kept the same throughout all recordings. The room had good acoustics to provide a clean audio sample. Although the room was not sound-attenuated, it was located in a quiet corner of the building, and there was no foot traffic or noise in the building during recording hours. Furthermore, there was no other electronic equipment that might have contributed to ambient noise.
Computerized text analysis of emotion content in vocal expression
Each participant’s interview was transcribed after deleting all prompts spoken by the interviewer. The Linguistic Inquiry and Word Count program (LIWC)14 was used to analyze the content of each participant’s interview. The LIWC analyzes speech on a word-by-word basis. It consists of a master dictionary with over 2,200 words and word stems. Each word is allocated to sub-dictionaries based on word categories such as “positive emotions.” For each participant, the LIWC counts the number of words that match each of 87 categories. Each category is then assigned a score as a percentage of the total number of words used. For example, if a participant said 10 words from the positive emotions word category and spoke a total of 100 words, then that participant’s score for positive emotions would be 10%. In the present study, we were specifically interested in expression of positive and negative emotion. We therefore analyzed only the 2 categories that pertained to emotion: “Positive Emotions” and “Negative Emotions.” (For a complete list of categories, see Pennebaker et al.14)
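As a concrete illustration of the percentage scoring just described, the sketch below counts category words and converts them to a share of total words. The actual LIWC dictionaries are proprietary, so the tiny word sets here are hypothetical stand-ins, not real LIWC categories.

```python
# Hypothetical stand-in word sets; the real LIWC master dictionary
# contains over 2,200 words and word stems across 87 categories.
POSITIVE = {"happy", "good", "great", "excited", "fun"}
NEGATIVE = {"sad", "bad", "tired", "bored", "cranky"}

def category_percentages(transcript):
    """Score a transcript LIWC-style: each category's word count
    expressed as a percentage of the total words spoken."""
    words = transcript.lower().split()
    total = len(words)
    if total == 0:
        return {"positive": 0.0, "negative": 0.0}
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return {"positive": 100.0 * pos / total,
            "negative": 100.0 * neg / total}
```

With a 100-word transcript containing 10 positive-category words, this yields a positive emotion score of 10%, matching the worked example above.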
Human rater judgments of emotion in vocal expression
Five positive (happy, energetic, interested, excited, and content) and 5 negative (frightened, contempt, anxious, bored, and gloomy) vocal expressions of emotion were analyzed through the audio recordings. These emotion displays were chosen using the guidelines reviewed by Juslin and Scherer11 on “choosing affective states” (p. 93) in the study of vocal expression. Starting from the list, the “Frequency of occurrence of 89 affect terms in 104 studies of vocal affect expression” (p. 94), we narrowed the emotions down to these 10 on the basis of 2 criteria: (1) that each emotion can be clearly defined and easily differentiated,11,13 and (2) that some of the emotions are adjectives from the PANAS-C,31 a self-report measure of positive and negative affect widely used in adolescent research. For each emotion, raters were required to code 1 of 5 intensity levels: (1) very slightly or not at all, (2) a little, (3) moderately, (4) quite a bit, and (5) extremely. This intensity scale is identical to the scale used in the PANAS-C.31 In order to capture emotional fluctuation, each recording was coded in 5-sec segments, and each segment was coded with all 10 emotion displays. Global positive and negative emotion composites were computed by averaging scores for the 5 positive emotions and the 5 negative emotions for each participant.
Eight human raters were trained to complete the coding. Raters were trained and tested for consistency prior to coding based on methods used in previous research.13,17 Each rater was responsible for both the 22:30 and 06:30 interviews of 7 participants. In order to reduce bias, raters were not given information about the specific aims of the study and were simply told that they were rating interviews conducted at 2 different times. Each rater then coded a second subset of participant interviews, so that interrater reliability could be checked. For positive emotions, interrater reliability for this subset was high, with intraclass correlations (ICC)32 for absolute agreement between coders ranging from 0.70 to 0.84. For negative emotions, interrater reliability was low, with ICCs ranging from 0.15 to 0.34. These correlations are consistent with previous research on vocal coding accuracy of these particular emotions.13
Computerized acoustic properties of vocal expression
While a great deal of research has focused on acoustic properties as measures of emotion,11,33 minimal research has leveraged this method to investigate emotion in sleep deprivation. The vocal properties investigated were selected from Juslin and Scherer’s summary of properties correlated with emotion.11 A total of 30 features were extracted in the categories of fundamental frequency, jitter, intensity, shimmer, speech rate, pauses, and high frequency energy.
Fundamental frequency (F0) is a measure of pitch and is represented by the rate (1/sec) at which the vocal folds open and close. The unit of measurement for F0 is cycles per second, or hertz (Hz). A sudden increase in fundamental frequency is associated with high-activation emotions such as anger, whereas a low fundamental frequency is interpreted as low energy or sadness.11 The dynamics of the fundamental frequency contour were characterized by several statistical measures: average (F0_avg), standard deviation (F0_std), minimum (F0_min), maximum (F0_max), and range (F0_range).
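Given a framewise F0 track (in Hz, with 0 marking unvoiced frames), the five contour statistics can be computed directly. This is a minimal sketch, not the paper's Praat-based pipeline:

```python
import statistics

def f0_summary(f0_track):
    """Summary statistics of an F0 contour (Hz), computed over
    voiced frames only (frames where F0 > 0)."""
    voiced = [f for f in f0_track if f > 0]
    return {
        "F0_avg": statistics.mean(voiced),
        "F0_std": statistics.stdev(voiced),
        "F0_min": min(voiced),
        "F0_max": max(voiced),
        "F0_range": max(voiced) - min(voiced),
    }
```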
Jitter is pitch perturbation and is represented by small-scale rapid and random fluctuations of F0, that is, fluctuations in the opening and closing of the vocal folds from one vocal cycle to the next. Previous research suggests that jitter is an indicator of stressor-provoked anxiety.18 Two methods were applied to calculate jitter in this study: (1) calculating the average of the first-order difference sequence in F0 (F0_jitter_PF), and (2) calculating the average of the difference sequence over the mean of running F0 values (rather than over the preceding F0 value) with different cycle lengths (F0_jitter_PQ_mean).
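Under one plausible reading of the two jitter variants described above (the text does not give exact formulas), both can be sketched as follows; the window length `cycle` is an assumed parameter, not a value from the paper.

```python
def jitter_pf(f0):
    """Average absolute first-order difference of consecutive F0 values."""
    diffs = [abs(b - a) for a, b in zip(f0, f0[1:])]
    return sum(diffs) / len(diffs)

def jitter_pq(f0, cycle=3):
    """Average absolute F0 difference normalized by a running mean of
    the preceding `cycle` F0 values (a perturbation-quotient variant)."""
    quotients = []
    for i in range(len(f0) - 1):
        window = f0[max(0, i - cycle + 1): i + 1]
        running_mean = sum(window) / len(window)
        quotients.append(abs(f0[i + 1] - f0[i]) / running_mean)
    return sum(quotients) / len(quotients)
```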
Intensity reflects the energy (in dB) in the acoustic signal, or loudness of speech. Previous research suggests that a rapid rise in intensity is associated with angry speech, whereas sad speech is characterized by low intensity.11 Several statistical measures were applied to describe the dynamics of intensity: average (Energy_avg), standard deviation (Energy_std), minimum (Energy_min), maximum (Energy_max), and range (Energy_range). Intensity can also be analyzed by interpreting its distribution over frequency bands (i.e., the spectrogram). Previous research suggests that an emphasis on loudness of psycho-acoustical barks in certain high frequency energy bands may be indicative of emotional speech.34 Specifically, the following energy values (in dB) in high frequency energy bands with bark scales were processed: energy_bark7 at 700-840 Hz, energy_bark8 at 840-1000 Hz, energy_bark9 at 1000-1170 Hz, energy_bark10 at 1170-1370 Hz, energy_bark11 at 1370-1600 Hz, energy_bark12 at 1600-1850 Hz, energy_bark13 at 1850-2150 Hz, energy_bark14 at 2150-2500 Hz, energy_bark15 at 2500-2900 Hz, and energy_bark16 at 2900-3400 Hz.
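The band edges below are taken from the bark bands listed above; the computation itself is a simplified sketch that sums raw spectral power per band, rather than reproducing the paper's psycho-acoustical dB loudness processing.

```python
import numpy as np

# Band edges (Hz) for barks 7-16, as listed in the text
BARK_BANDS = {
    7: (700, 840), 8: (840, 1000), 9: (1000, 1170), 10: (1170, 1370),
    11: (1370, 1600), 12: (1600, 1850), 13: (1850, 2150),
    14: (2150, 2500), 15: (2500, 2900), 16: (2900, 3400),
}

def bark_band_energy(signal, sample_rate):
    """Sum the power spectrum of `signal` inside each bark band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return {band: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for band, (lo, hi) in BARK_BANDS.items()}
```

For a 900 Hz tone sampled at 44.1 kHz (the study's recording rate), nearly all the energy lands in bark band 8 (840-1000 Hz), as expected.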
Shimmer is the loudness perturbation in speech and is measured by the small variations of energy amplitude in successive glottal cycles. Shimmer can serve as an indicator of underlying stress in human speech.18 Two features were calculated to describe shimmer: (1) Loud_shimmer_PF, which is the average of the first order difference sequence, and (2) Loud_shimmer_PQ_mean, which is the average of the difference sequence over the mean of running energy values (rather than over the preceding energy value) with different cycle lengths.
For the temporal aspects of speech, we included measures to describe speech rate and pauses. Previous research indicates that sadness often results in slower speech and more pauses.34 Both speech rate and pauses were calculated by measuring the voiced sections (F0 > 0) in speech. Speech rate was represented by the relative ratio of voiced versus unvoiced sections (ratio_voiced_over_unvoiced). Pauses were calculated by counting unvoiced sections (silence_voiced_count) and summing the total duration (in seconds) of silence (unvoiced sections, silence_duration).
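Working from the voiced/unvoiced definition above (voiced wherever F0 > 0), the three temporal features can be sketched from a framewise F0 track; the frame duration is an assumed parameter, not a value stated in the paper.

```python
def temporal_features(f0_track, frame_seconds=0.01):
    """Speech-rate and pause features from a framewise F0 track,
    treating frames with F0 > 0 as voiced."""
    voiced = sum(1 for f in f0_track if f > 0)
    unvoiced = len(f0_track) - voiced
    # Count each run of consecutive unvoiced frames as one pause
    pauses = 0
    prev_voiced = True
    for f in f0_track:
        if f <= 0 and prev_voiced:
            pauses += 1
        prev_voiced = f > 0
    return {
        "ratio_voiced_over_unvoiced":
            voiced / unvoiced if unvoiced else float("inf"),
        "pause_count": pauses,
        "silence_duration": unvoiced * frame_seconds,
    }
```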
As the amount of high-frequency energy increases, the voice sounds sharp and less soft,11 which can also be emotion-dependent. Therefore, we analyzed the amount of high-frequency energy in the spectrogram by calculating the cumulative values (in dB) in the spectrogram that appeared above 2 cut-off frequency thresholds: 500 Hz (HF500) and 1000 Hz (HF1000). In addition, the trend of high-frequency energy distribution (Slope1000) was calculated by the linear regression of the energy distribution in the frequency over 1000 Hz.
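The three high-frequency features can be sketched as below. This is a simplified power-spectrum version under assumed processing choices, not the paper's exact dB computation.

```python
import numpy as np

def high_frequency_features(signal, sample_rate):
    """Cumulative spectral energy above 500 Hz (HF500) and 1000 Hz
    (HF1000), plus the least-squares slope of the energy distribution
    above 1000 Hz (Slope1000)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    hf500 = float(spectrum[freqs > 500].sum())
    hf1000 = float(spectrum[freqs > 1000].sum())
    above = freqs > 1000
    # Fit a line to the energy distribution over 1000 Hz; its slope
    # tracks how sharply energy falls off with frequency
    slope = float(np.polyfit(freqs[above], spectrum[above], 1)[0])
    return {"HF500": hf500, "HF1000": hf1000, "Slope1000": slope}
```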
All properties were extracted from the digital audio recordings via the MATLAB platform, based on methods used by Moore et al.35 and Fernandez and Picard.34 Specifically, we applied Moore’s implementation to find intensity and Fernandez’s implementation to find jitter, shimmer, speech rate, and high frequency energy. In addition, we used the Praat speech analysis software to extract fundamental frequency.36 Praat is a computer program commonly used for acoustic analysis of vocal expression in clinical and research settings.
Procedure
The study protocol was approved by the Institutional Review Board of the University of California, Berkeley. Adult participants and adolescent parents or guardians were told about the procedures of the study and gave informed consent. All adolescent participants signed written informed assent. The SCID or K-SADS and the DSISD were administered by doctoral student interviewers with previous experience conducting structured clinical interviews. In the case of discrepant child and parent reports, the participant was rendered ineligible for the study protocol if any Axis I psychopathology or sleep disorder was endorsed by either parent or child. Additionally, during this visit adolescents completed the self-rating scale for pubertal development.27 If criteria for sleep and psychological health were met, participants were invited back to the Sleep and Psychological Disorders Laboratory for an overnight sleep deprivation protocol. All participants were compensated for their time.
During the week before the sleep deprivation protocol, participants were asked to keep a detailed sleep diary (for an average of 5 days) in their home environment while they maintained their usual sleep/wake schedules. The sleep diary was completed by 91% of participants. A portion of all participants (67%) also wore actigraphs (Actiwatch AW64; MiniMitter, Bend, OR) during this period. Due to availability constraints, not all participants received an actigraph. There was a significant correlation between sleep diary reported total sleep time and the objective actigraphy estimates of total sleep time (r = 0.53, P = 0.001).
The sleep deprivation protocol occurred over 2 nights. On the first night, participants were asked to restrict their sleep to a maximum of approximately 6.5 h at home. Compliance was checked using actigraphy or sleep diary data. Actiwatch or sleep diary data were available for 93% of participants. Participants came to the laboratory on the second night at 22:00. At 22:30, a baseline SSS rating was completed and the Speak Freely Interview Procedure was administered. Participants were then continuously monitored throughout the night by trained laboratory staff. They were permitted to interact with the laboratory staff in order to ensure wakefulness, as well as to read, watch movies, and play board games. A small snack, such as fruit or crackers, was made available by the laboratory staff. No caffeine or other stimulants were allowed. Between 03:00 and 05:00, participants were given a 2-h nap opportunity. After waking, participants had a breakfast consisting of fruit, crackers, yogurt, and cheese. At 06:30, the SSS and Speak Freely Interview were repeated.
Overview of Analyses
Group differences in demographic and sleep variables were analyzed with t-tests or χ2 tests. For the computerized text analysis, human rater judgments, and computerized acoustic properties analysis, we used repeated-measures analysis of variance (ANOVA) to test our hypotheses. In these analyses, the primary between-subjects variable was Group (adolescent, adult), and the within-subjects variable was Time (22:30, 06:30). When differences between groups emerged, we conducted follow-up t-tests to examine the source of difference. All P values reported are 2-tailed.
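The follow-up paired comparisons can be illustrated with a hand-rolled paired-samples t statistic (the standard formula, with n − 1 degrees of freedom; p-values and the omnibus ANOVA are omitted here):

```python
import math
import statistics

def paired_t(condition_a, condition_b):
    """Paired-samples t statistic: the mean within-subject difference
    divided by its standard error."""
    diffs = [a - b for a, b in zip(condition_a, condition_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
```

In this design, `condition_a` and `condition_b` would hold each participant's 22:30 and 06:30 scores on the same measure.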
RESULTS
Participant Characteristics
Demographic variables are reported in Table 1. There were no significant differences between the adolescent and adult groups on gender or race/ethnicity. There were expected group differences in age, education level, and pubertal status. The families of the adolescent participants had significantly higher household incomes than the adults. There was no correlation between income level and any of the outcome variables of interest.
Table 1.
Participant demographics and sleep characteristics
| | Adolescents (n = 38) | Adults (n = 17) | Test Statistic |
|---|---|---|---|
| Female | 40% | 53% | χ2 = 0.89 |
| Mean Age (SD) | 13.2 (1.5) | 38.5 (8.8) | t = 11.77** |
| Age Range | 11–15y | 30–60y | |
| Ethnicity (n) | | | χ2 = 4.32 |
| Asian | 9 | 5 | |
| Caucasian | 22 | 9 | |
| Latino/a | 2 | 0 | |
| African American | 4 | 3 | |
| Other | 1 | 0 | |
| Mean Years Education (SD) | 8 (1.6) | 16.4 (2.3) | t = 13.16** |
| Mean Range Household Income | 75K–100K | 25K–50K | χ2 = 14.73* |
| Pubertal Status | | N/A | |
| Pre-Pubertal | 0 | N/A | |
| Early-Pubertal | 4 | N/A | |
| Mid-Pubertal | 18 | N/A | |
| Late-Pubertal | 12 | N/A | |
| Post-Pubertal | 2 | N/A | |
| Mean TST: Week average at home (SD) | 7.4 (0.85) | 6.6 (1.19) | t = 1.86 |
| Mean TST: Night prior to lab visit (SD) | 6.3 (1.55) | 5.8 (1.11) | t = 1.23 |
| Mean SSS: 22:30 (SD) | 2.39 (0.97) | 2.63 (1.20) | t = 0.74 |
| Mean SSS: 06:30 (SD) | 4.32 (1.38) | 4.94 (1.48) | t = 1.48 |
SD, standard deviation; TST, total sleep time; SSS, Stanford Sleepiness Scale. *P < 0.01, **P < 0.001.
Sleep Characteristics
Mean total sleep time (TST) for the week preceding the overnight lab visit indicated that adolescent participants slept an average of 7.4 h each night, and adult participants slept an average of 6.6 h. There were no differences between the adolescents and adults for habitual TST (see Table 1).
We also examined average TST for the night just prior to attending the lab. Recall that all participants were asked to restrict their sleep to 6.5 h. Mean TST indicated that adolescent participants received an average of 6.3 h sleep on the night before entering the lab and adult participants received an average of 5.8 h. There were no significant differences between groups (see Table 1).
A repeated-measures ANOVA was conducted on the overnight sleepiness ratings with Group (adolescent, adult) as the between-subjects factor and Time (22:30, 06:30) as the within-subject factor. There was no main effect of Group, F1,54 < 1, ns; but there was a main effect of Time, F1,54 = 68.94, P < 0.001, with participants reporting more sleepiness at 06:30 following sleep deprivation than at 22:30. The Group × Time interaction was not significant, F1,54 < 1, ns. It is important to note that even though participants entered the lab with a small amount of sleep deprivation (due to sleep being restricted the night before), all participants reported high levels of alertness and very little sleepiness upon entering the lab at 22:30 (see Table 1).
Computerized Text Analysis of Emotion Content in Vocal Expression
Table 2 presents the mean values for the analyses on positive and negative emotion using the computerized text analysis measure and the human rater judgments measure.
Table 2.
Mean values for positive and negative emotions in adolescents and adults
| | Adolescents | | Adults | |
|---|---|---|---|---|
| | 22:30 | 06:30 | 22:30 | 06:30 |
| Computerized Text Analysis | ||||
| % Positive Emotions | 2.93 (1.77) | 1.47 (1.29) | 3.0 (1.59) | 2.96 (1.43) |
| % Negative Emotions | 1.10 (1.06) | 1.14 (1.50) | 0.92 (0.14) | 1.54 (1.51) |
| Human Rater Judgments | ||||
| Positive Emotions | 1.46 (0.64) | 1.14 (0.35) | 1.73 (0.76) | 1.24 (0.32) |
| Negative Emotions | 1.02 (0.34) | 1.26 (0.43) | 1.10 (0.28) | 1.19 (0.26) |
Mean values are presented with standard deviations in parentheses.
We first examined overall word count. There was a main effect of Time on overall word count (F1,54 = 8.91, P < 0.01), such that both the adolescent and adult groups used fewer words at 06:30 (adolescent M = 156, SD = 123 words; adult M = 228, SD = 131 words) relative to 22:30 (adolescent M = 200, SD = 168 words; adult M = 269, SD = 150 words). There were no significant Group effects, nor was an interaction present. Word count data revealed that the groups did not differ solely on the basis of the number of words generated during the 2 Speak Freely Interviews.
We conducted 2 separate repeated-measures ANOVAs for the LIWC categories, Positive Emotions and Negative Emotions. For Positive Emotions, there was a significant Group × Time interaction, F1,54 = 5.36, P < 0.05 (see Figure 1). Follow-up tests indicated no group differences at 22:30 (t54 < 1, ns), but at 06:30 the adolescents expressed fewer positive emotion words than the adults (t54 = 3.83, P < 0.001). Furthermore, the adolescent group used fewer positive emotion words at 06:30 compared to 22:30 (t37 = 4.19, P < 0.001), while adults displayed no change (t16 < 1, ns).
Figure 1.
Percentage of “Positive Emotion” words expressed at 22:30 compared to 06:30 in adolescents relative to adults. Error bars represent standard error of the mean (SEM).
For Negative Emotions, there was no main effect of Group (F1,54 < 1, ns) or Time (F1,54 = 1.84, ns), and no Group × Time interaction (F1,54 = 1.48, ns). There were no significant differences between males and females for positive emotions or negative emotions.
Human Rater Judgments of Emotion in Vocal Expression
We conducted 2 separate repeated-measures ANOVAs for the positive emotions composite and the negative emotions composite. For the positive emotions composite, there was a significant main effect of Time (F1,52 = 22.9, P < 0.001), such that all participants displayed less positive emotion at 06:30 than at 22:30. There was no main effect of Group (F1,52 = 1.75, ns) or Group × Time interaction (F1,52 < 1, ns).
For the negative emotion composite, there was a significant main effect of Time (F1,52 = 10.84, P < 0.01), such that all participants displayed more negative emotion at 06:30 than at 22:30. There was no main effect of Group (F1,52 < 1, ns) and no Group × Time interaction (F1,52 = 2.16, ns). There were no significant differences between males and females for the positive or negative emotion composites.
Computerized Acoustic Properties of Vocal Expression
Table 3 presents the mean values for each of the acoustic properties from 22:30 to 06:30 for the adolescent and adult participants. We conducted repeated measures ANOVAs for the 30 acoustic properties. For fundamental frequency (F0), there was a significant main effect of Time for F0 average (F1,53 = 8.14, P < 0.01), such that all participants expressed a decreased rate in F0 at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns) for F0 average. Additionally, there were no main effects of Group or Time and no Group × Time interactions for the standard deviation, minimum, maximum, and range of F0 (see Table 3).
Table 3.
Mean values for acoustic properties in adolescents and adults
| | Adolescents | | Adults | |
|---|---|---|---|---|
| | 22:30 | 06:30 | 22:30 | 06:30 |
| F0_avg (Hz) | 148.72 (27.86) | 139.20 (20.73) | 144.46 (30.89) | 134.19 (22.03) |
| F0_std (Hz) | 51.08 (15.89) | 43.72 (17.40) | 51.10 (15.61) | 52.48 (26.67) |
| F0_min (Hz) | 98.18 (11.37) | 95.16 (5.75) | 93.62 (10.32) | 90.86 (8.84) |
| F0_max (Hz) | 237.11 (47.3) | 213.86 (48.26) | 229.22 (61.75) | 231.48 (98.86) |
| F0_range (Hz) | 138.93 (45.49) | 118.70 (48.55) | 135.60 (54.53) | 140.61 (101.37) |
| F0_jitter_PF (Hz) | 0.34 (0.07) | 0.36 (0.07) | 0.32 (0.07) | 0.33 (0.06) |
| F0_jitter_PQ_mean (Hz) | 0.20 (0.04) | 0.21 (0.04) | 0.19 (0.04) | 0.20 (0.03) |
| Energy_avg (dB) | 0.21 (0.01) | 0.22 (0.01) | 0.21 (0.01) | 0.21 (0.01) |
| Energy_std (dB) | 0.13 (0.02) | 0.12 (0.02) | 0.13 (0.02) | 0.13 (0.02) |
| Energy_min (dB) | 0.09 (0.02) | 0.09 (0.02) | 0.09 (0.02) | 0.09 (0.02) |
| Energy_max (dB) | 0.45 (0.04) | 0.44 (0.04) | 0.43 (0.03) | 0.45 (0.03) |
| Energy_range (dB) | 0.36 (0.05) | 0.35 (0.05) | 0.35 (0.05) | 0.36 (0.04) |
| Energy_bark7 at 700-840 Hz (dB) | 3.46×10³ (1.09×10³) | 2.60×10³ (1.59×10³) | 3.70×10³ (1.57×10³) | 3.48×10³ (1.97×10³) |
| Energy_bark8 at 840-1000 Hz (dB) | 2.66×10³ (1.37×10³) | 1.83×10³ (1.75×10³) | 3.20×10³ (1.59×10³) | 2.85×10³ (2.55×10³) |
| Energy_bark9 at 1000-1170 Hz (dB) | 2.36×10³ (1.53×10³) | 1.54×10³ (1.52×10³) | 2.66×10³ (1.46×10³) | 2.37×10³ (1.90×10³) |
| Energy_bark10 at 1170-1370 Hz (dB) | 1.87×10³ (1.14×10³) | 1.16×10³ (0.98×10³) | 1.98×10³ (1.16×10³) | 1.76×10³ (1.41×10³) |
| Energy_bark11 at 1370-1600 Hz (dB) | 1.54×10³ (0.92×10³) | 0.92×10³ (0.68×10³) | 1.59×10³ (0.81×10³) | 1.36×10³ (1.05×10³) |
| Energy_bark12 at 1600-1850 Hz (dB) | 1.28×10³ (0.77×10³) | 0.75×10³ (0.59×10³) | 1.26×10³ (0.70×10³) | 0.96×10³ (0.67×10³) |
| Energy_bark13 at 1850-2150 Hz (dB) | 1.18×10³ (0.83×10³) | 0.64×10³ (0.61×10³) | 0.87×10³ (0.57×10³) | 0.63×10³ (0.43×10³) |
| Energy_bark14 at 2150-2500 Hz (dB) | 0.85×10³ (0.68×10³) | 0.45×10³ (0.56×10³) | 0.51×10³ (0.38×10³) | 0.38×10³ (0.31×10³) |
| Energy_bark15 at 2500-2900 Hz (dB) | 0.46×10³ (0.40×10³) | 0.25×10³ (0.32×10³) | 0.32×10³ (0.26×10³) | 0.27×10³ (0.29×10³) |
| Energy_bark16 at 2900-3400 Hz (dB) | 0.40×10³ (0.46×10³) | 0.22×10³ (0.29×10³) | 0.30×10³ (0.25×10³) | 0.26×10³ (0.25×10³) |
| Loud_shimmer_PF | 0.73 (0.16) | 0.79 (0.16) | 0.72 (0.14) | 0.79 (0.12) |
| Loud_shimmer_PQ_mean | 0.27 (0.05) | 0.29 (0.04) | 0.26 (0.04) | 0.28 (0.04) |
| ratio_voiced_over_unvoiced | 2.90×10³ (2.08×10³) | 3.0×10³ (1.87×10³) | 2.90×10³ (1.88×10³) | 3.0×10³ (1.94×10³) |
| silence_voiced_count | 9.60×10³ (1.57×10³) | 8.40×10³ (2.45×10³) | 9.81×10³ (1.65×10³) | 8.63×10³ (1.35×10³) |
| silence_duration (seconds) | 0.70 (0.08) | 0.70 (0.14) | 0.71 (0.06) | 0.77 (0.05) |
| HF500 (dB) | 8.31 (8.34) | 5.97 (5.78) | 6.99 (4.36) | 6.41 (6.75) |
| HF1000 (dB) | 0.67 (0.46) | 0.48 (0.28) | 0.66 (0.32) | 0.51 (0.21) |
| Slope1000 | -0.82 (0.74) | -0.39 (0.46) | -0.91 (0.74) | -0.58 (0.51) |
Mean values are presented with standard deviations in parentheses.
There was a significant main effect of Time for both of the methods applied to calculate jitter. For the average of the first-order difference sequence in F0, there was a change in jitter (F1,53 = 4.12, P < 0.05), such that all participants expressed an increase in jitter at 06:30 relative to 22:30. There was no main effect of Group (F1,53 = 1.65, ns) or Group × Time interaction (F1,53 < 1, ns). Additionally, there was a main effect of Time for the average of the difference sequence over the mean of running F0 values with different cycle lengths (F1,53 = 5.23, P < 0.05), such that all participants expressed an increase in jitter at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns).
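The two jitter measures described above (and the parallel shimmer measures, which apply the same formulas to the energy contour) can be sketched as follows. This is an illustrative NumPy sketch of the general idea, not the study's exact feature-extraction code; the running-window length `k` is an assumed parameter.

```python
import numpy as np

def perturbation_pf(contour):
    """Mean absolute first-order difference of a contour
    (jitter when applied to F0, shimmer when applied to energy)."""
    contour = np.asarray(contour, dtype=float)
    return np.mean(np.abs(np.diff(contour)))

def perturbation_pq(contour, k=3):
    """Mean absolute difference normalized by a running mean of k
    successive values (a perturbation-quotient-style measure)."""
    contour = np.asarray(contour, dtype=float)
    running = np.convolve(contour, np.ones(k) / k, mode="valid")
    diffs = np.abs(np.diff(contour))[: len(running)]
    return np.mean(diffs / running)

# A perturbed contour yields larger values than a smooth one.
smooth = np.linspace(140, 150, 100)                          # steady F0 (Hz)
jittery = smooth + np.random.default_rng(1).normal(0, 2, 100)
print(perturbation_pf(smooth), perturbation_pf(jittery))
```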
For intensity, there were no main effects of Group or Time, and no Group × Time interactions for the average, standard deviation, minimum, maximum, and range of energy (see Table 3). However, when intensity was measured in specific high frequency energy bands, there were significant main effects of Time in the following bark scales, such that all participants expressed decreases in psycho-acoustical barks at 06:30 relative to 22:30: bark7 at 700-840 Hz (F1,53 = 4.31, P < 0.05); bark8 at 840-1000 Hz (F1,53 = 5.33, P < 0.05); bark9 at 1000-1170 Hz (F1,53 = 9.89, P < 0.01); bark10 at 1170-1370 Hz (F1,53 = 11.62, P = 0.001); bark11 at 1370-1600 Hz (F1,53 = 12.97, P = 0.001); bark12 at 1600-1850 Hz (F1,53 = 16.81, P < 0.001); bark13 at 1850-2150 Hz (F1,53 = 13.40, P = 0.001); bark14 at 2150-2500 Hz (F1,53 = 8.15, P < 0.01); and bark15 at 2500-2900 Hz (F1,53 = 4.22, P < 0.05). There were no main effects of Group or any Group × Time interactions for these bark scales (see Table 3). Additionally, there were no significant main effects of Time or Group and no Group × Time interactions for bark16 at 2900-3400 Hz (see Table 3). Note the magnitude of energy appearing in frequency bands (energy_bark7-16) is much smaller than the magnitude of the overall energy (energy_avg), given that each bark value is a decomposition of the total energy.
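A band-limited energy of this kind can be illustrated by summing power-spectrum bins that fall inside the band. A minimal sketch, assuming a simple FFT-based decomposition (the study's exact bark computation may differ), using the bark7 and bark8 band edges from Table 3:

```python
import numpy as np

def band_energy(signal, sr, f_lo, f_hi):
    """Sum of power-spectrum bins within [f_lo, f_hi) Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spectrum[mask].sum()

sr = 16000
t = np.arange(sr) / sr                  # 1 s of audio
tone = np.sin(2 * np.pi * 770 * t)      # a 770 Hz tone sits inside bark7

in_band = band_energy(tone, sr, 700, 840)    # bark7: 700-840 Hz
out_band = band_energy(tone, sr, 840, 1000)  # bark8: 840-1000 Hz
print(in_band > out_band)
```

Because each band captures only a slice of the spectrum, the band values sum to (at most) the total energy, which is why the bark7-16 magnitudes in Table 3 are much smaller than the overall energy.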
There was a significant main effect of Time for both methods applied to calculate shimmer. For the average of the first-order difference sequence in energy, there was a change in shimmer (F1,53 = 11.83, P = 0.001), such that all participants expressed an increase in shimmer at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns). Additionally, there was a main effect of Time for the average of the difference sequence over the mean of running energy values with different cycle lengths (F1,53 = 9.97, P < 0.01), such that all participants expressed an increase in shimmer at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns).
For the temporal aspects of speech, there were no main effects of Group or Time and no Group × Time interactions for speech rate (see Table 3). However, there was a main effect of Time for pauses (F1,53 = 12.58, P = 0.001), such that all participants expressed a decrease in pauses at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns). Additionally, there were no main effects of Group or Time and no Group × Time interactions for the total duration of silence (see Table 3).
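A pause count of this sort can be approximated by counting runs of consecutive low-energy frames. This is a hypothetical thresholding sketch, not the segmentation actually used in the study; the frame length and energy threshold are assumptions.

```python
import numpy as np

def count_pauses(signal, sr, frame_ms=20, threshold=0.01):
    """Count runs of consecutive low-energy frames as pauses.
    Frame length and threshold are illustrative choices."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energies = np.array([
        np.mean(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n)
    ])
    silent = energies < threshold
    # A pause starts wherever a silent frame follows a non-silent one.
    return int(np.sum(silent[1:] & ~silent[:-1]) + silent[0])

sr = 8000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 200 * t)   # stand-in for voiced speech
speech[2000:3000] = 0.0                # one inserted silence -> one pause
print(count_pauses(speech, sr))
```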
For high frequency energy in the spectrogram above 500 Hz and 1000 Hz, there were no main effects of Group or Time and no Group × Time interaction for 500 Hz (see Table 3), but there was a main effect of Time for 1000 Hz (F1,53 = 7.91, P < 0.01), such that all participants expressed a decrease in high frequency energy above 1000 Hz at 06:30 relative to 22:30. There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns). In addition, the spectral slope over 1000 Hz became flatter in all participants at 06:30 relative to 22:30 (F1,53 = 26.56, P < 0.001). There was no main effect of Group (F1,53 < 1, ns) or Group × Time interaction (F1,53 < 1, ns).
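The spectral slope measure can be sketched as a least-squares line fit to the log power spectrum above the cutoff frequency. This is an illustrative definition on synthetic noise, not necessarily the exact formula used by the study's analysis software.

```python
import numpy as np

def spectral_slope_above(signal, sr, cutoff=1000.0):
    """Slope of a least-squares line fit to the log power spectrum
    above the cutoff frequency (illustrative definition)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = (freqs > cutoff) & (power > 0)
    slope, _ = np.polyfit(freqs[mask] / 1000.0, np.log10(power[mask]), 1)
    return slope

rng = np.random.default_rng(2)
sr = 16000
noise = rng.normal(size=sr)                            # flat spectrum

# Attenuating high frequencies makes the fitted slope more negative.
smoothed = np.convolve(noise, np.ones(8) / 8, mode="same")
print(spectral_slope_above(noise, sr), spectral_slope_above(smoothed, sr))
```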
DISCUSSION
The overall aim of this study was to investigate the impact of sleep deprivation on positive and negative emotions via a multimethod approach to vocal expression in adolescents relative to adults. The first prediction was that all participants would exhibit a pattern of decreased positive emotion and increased negative emotion in vocal expression at 06:30 compared to 22:30. In partial support, all participants expressed a decrease in positive emotion words at 06:30 relative to 22:30 on the computerized text analysis measure. However, this measure did not show an increase in negative emotion words. Also consistent with the hypothesis were a decrease in positive emotion expression and an increase in negative emotion expression based on the human rater judgments. The computerized acoustic properties analysis added to the support for the hypothesis, with the most dramatic changes observed as a result of sleep deprivation in pitch, energy, and vocal sharpness. In other words, vocal expression took on a deeper pitch, became less intense in the high frequency bands, and lost sharpness. Previous research has described decreases in pitch as being associated with sadness.11 Additionally, reduced high frequency energy has been associated with low physiological activation.17 Low activation appears to be associated with sadness and fatigue.17,37 Finally, increased perturbations in pitch and loudness of speech (jitter and shimmer) have been interpreted as indicative of stress or anxiety.18 We also note that there was a decrease in pauses at 06:30 relative to 22:30; however, there were no differences in the rate of speech or total duration of silence. Therefore, it is unlikely that the pitch, energy, and vocal sharpness findings can be explained by participants speaking more slowly due to fatigue.
Overall, these results are consistent with previous studies indicating that adolescents and adults experience negative mood in relation to sleep deprivation.2,6 Hence it is possible that, in addition to the decrease in positive emotion displayed in the computerized text analysis and human rater results, low activation negative emotions (such as sadness) may have increased. Taken together, these results underscore the importance of using a multimethod approach and are consistent with previous research indicating that positive emotion decreases and negative emotion increases in sleep-deprived participants, relative to rested participants.3,4 The fact that no change was observed in negative emotions using the computerized text analysis raises the possibility that the context and acoustic nuances of vocal expression are critical in the measurement of emotion, and that words alone cannot capture the full range of emotional experience.11
The second hypothesis was that the predicted effects of sleep deprivation would be particularly pronounced in the adolescent group relative to the adult group. This hypothesis was partially supported. Based on the computerized text analysis, the adolescent group expressed fewer positive emotion words than the adult group when sleep deprived. Additionally, the adolescent group displayed a reduction in positive emotion words from 22:30 to 06:30, while the adults displayed no change. It is important to note that the groups did not differ on overall word count. Hence, it seems unlikely that these results can be explained by the adolescent participants generating fewer words.
Several caveats are important to consider. First, the relatively small sample size, particularly for the adult group, may have limited statistical power. Additionally, the small sample size of the adolescent group precluded analysis of the effect of pubertal status on the expression of emotion, and there is evidence that the voice undergoes changes during puberty, particularly among adolescent boys.38 However, we believe that a strength of the current study was the within-subjects design, which allowed comparison of vocal expression at two different time points within the same participant. Second, there is a possibility that the human raters may not have been blind to condition, given that the content discussed by the participants may have revealed aspects of the aims. However, in order to reduce bias, raters were not given information about the specific aims of the study and were simply told that they were rating interviews conducted at two different times. In addition, recall that interrater reliability for the human rater judgments of negative emotions was low. As such, these results should be interpreted with caution. However, we note that accuracy at detecting emotion via vocal expression varies widely, and our current correlations are well within the accuracy ranges reported for this type of coding.13 Third, future work should use a high-quality external microphone (e.g., the VoiceTracker array microphone by AcousticMagic) in a sound-attenuated room. It is possible that the built-in microphone used in the current study may have lost some nuanced details. However, in the current investigation, recording conditions were kept consistent across all participants, and the within-subject design allowed for the analysis of change in acoustic properties.
Aside from these specific concerns about the results produced by the measurements of vocal expression, there might also be a more general concern about how well the methods used in the current study can actually measure emotion. Indeed, other speech researchers have wondered whether changes in the voice are indicative of changes in physiological arousal rather than changes in valence.13 However, the multimethod approach of the current study allowed one method to compensate for the shortcomings of another. For example, the changes in high frequency energy alone might have been interpreted as changes in physiological arousal were it not for the human rater judgments and computerized text analysis, which supplemented them with information about positive and negative emotions. On the other hand, human raters are not able to hear specific changes in the high frequency energy bands. This is consistent with previous research on vocal expression of emotion, which suggests that one method can enhance the results of another.13,19
Finally, the design did not control for circadian influences, which some evidence suggests may be important to consider in studies of adolescents and emotion.39,40 Indeed, many adolescent researchers have argued that differences in the circadian system during adolescence compared to adulthood should compel public policy makers to delay school start times.41 If the current results are primarily due to circadian differences between adolescents and adults, then the public health implications remain significant, especially given that adolescents are often required to wake before 06:30 during the school year. We also note that even if the adolescents and adults in this study differed in their circadian timing, their subjective reports of sleepiness at 06:30 did not significantly differ. In addition, light exposure can induce circadian phase shifts,42 and artificial light exposure was not strictly controlled in this protocol. Hence, future research in this area is needed to disentangle the effects of sleep deprivation versus circadian and light exposure influences on expression of emotion.
In sum, supporting previous research suggesting a critical role for sleep in healthy emotional functioning,1–4 the findings of the current study indicate that positive emotion decreases and negative emotion increases in response to sleep deprivation, and that adolescents are differentially vulnerable to these effects. These findings have important implications given the prevalence of sleep deprivation in our 24/7 society, especially among adolescents.6 Furthermore, these findings underscore the importance of using a multimethod approach when evaluating the impact of sleep (or lack of sleep) on emotions.
ACKNOWLEDGMENTS
This project was supported by National Institute of Mental Health Grant R24 MH067346 awarded to RED and by National Institute of Child Health and Human Development Grant F31 HD058411 awarded to ELM. This work was performed at the University of California, Berkeley.
DISCLOSURE STATEMENT
This was not an industry supported study. Dr. Chang has received research support from VG-Bioinformatics. The other authors have indicated no financial conflicts of interest.
REFERENCES
- 1. Pilcher JJ, Huffcutt AJ. Effects of sleep deprivation on performance: A meta-analysis. Sleep. 1996;19:318–26. doi: 10.1093/sleep/19.4.318.
- 2. Dinges DF, Pack F, Williams K, et al. Cumulative sleepiness, mood disturbance, and psychomotor vigilance performance decrements during a week of sleep restricted to 4-5 hours per night. Sleep. 1997;20:267–77.
- 3. Franzen PL, Siegle GJ, Buysse DJ. Relationships between affect, vigilance, and sleepiness following sleep deprivation. J Sleep Res. 2008;17:34–41. doi: 10.1111/j.1365-2869.2008.00635.x.
- 4. Zohar D, Tzischinsky O, Epstein R, Lavie P. The effects of sleep loss on medical residents' emotional reactions to work events: a cognitive-energy model. Sleep. 2005;28:47–54. doi: 10.1093/sleep/28.1.47.
- 5. Wolfson AR, Carskadon MA, Mindell JA, Drake C. The National Sleep Foundation: Sleep in America poll. 2006. [Accessed June 10, 2009]. Available from: http://www.sleepfoundation.org/sites/default/files/2006_summary_of_findings.pdf.
- 6. Wolfson AR, Carskadon MA. Sleep schedules and daytime functioning in adolescents. Child Dev. 1998;69:875–87.
- 7. Oginska H, Pokorski J. Fatigue and mood correlates of sleep length in three age-social groups: School children, students, and employees. Chronobiol Int. 2006;23:1317–28. doi: 10.1080/07420520601089349.
- 8. Spear LP. Neurobehavioral changes in adolescence. Curr Dir Psychol Sci. 2000;9:111–4.
- 9. Resnick MD, Bearman PS, Blum RW, et al. Protecting adolescents from harm: Findings from the National Longitudinal Study on Adolescent Health. JAMA. 1997;278:823–32. doi: 10.1001/jama.278.10.823.
- 10. Darwin C. The expression of the emotions in man and animals. New York: Oxford University Press; 1998. (Original work published 1872)
- 11. Juslin PN, Scherer KR. Vocal expression of affect. In: Harrigan JA, Rosenthal R, Scherer KR, editors. The new handbook of methods in nonverbal behavior research. New York: Oxford University Press; 2005. pp. 65–135.
- 12. Gottschalk LA, Gleser GC. The measurement of psychological states through the content analysis of verbal behavior. Oxford, England: University of California Press; 1969.
- 13. Banse R, Scherer KR. Acoustic profiles in vocal emotion expression. J Pers Soc Psychol. 1996;70:614–36. doi: 10.1037//0022-3514.70.3.614.
- 14. Pennebaker JW, Francis ME, Booth RJ. Linguistic Inquiry and Word Count (LIWC). Mahwah, NJ: Lawrence Erlbaum; 2001.
- 15. Pennebaker JW, Mehl MR, Niederhoffer KG. Psychological aspects of natural language use: Our words, our selves. Annu Rev Psychol. 2003;54:547–77. doi: 10.1146/annurev.psych.54.101601.145041.
- 16. Douglas-Cowie E, Campbell N, Cowie R, Roach P. Emotional speech: Towards a new generation of databases. Speech Commun. 2003;40:33–60.
- 17. Laukka P, Juslin PN, Bresin R. A dimensional approach to vocal expression of emotion. Cogn Emot. 2005;19:633–53.
- 18. Fuller BF, Horii Y, Conner DA. Validity and reliability of nonverbal voice measures as indicators of stressor-provoked anxiety. Res Nurs Health. 1992;15:379–89. doi: 10.1002/nur.4770150507.
- 19. Monnot M, Orbelo D, Riccardo L, et al. Acoustic analyses support subjective judgments of vocal emotion. Ann N Y Acad Sci. 2004;1000:288–92. doi: 10.1196/annals.1280.027.
- 20. Giedd JN. Structural magnetic resonance imaging of the adolescent brain. Ann N Y Acad Sci. 2004;1021:77–85. doi: 10.1196/annals.1308.009.
- 21. Richardson GS, Malin HV. Circadian rhythm sleep disorders: Pathophysiology and treatment. J Clin Neurophysiol. 1996;13:17–31. doi: 10.1097/00004691-199601000-00003.
- 22. Campbell SS, Murphy PJ. The nature of spontaneous sleep across adulthood. J Sleep Res. 2007;16:24–32. doi: 10.1111/j.1365-2869.2007.00567.x.
- 23. Edinger JD, Bonnet MH, Bootzin RR, et al. Derivation of research diagnostic criteria for insomnia: Report of an American Academy of Sleep Medicine work group. Sleep. 2004;27:1567–96. doi: 10.1093/sleep/27.8.1567.
- 24. American Psychiatric Association. Diagnostic and statistical manual of mental disorders, Fourth Edition, Text Revision (DSM-IV-TR). Washington, DC: American Psychiatric Association; 2000.
- 25. First MB, Spitzer RL, Gibbon M, Williams J. Structured Clinical Interview for DSM-IV-TR Axis I Disorders-Patient Edition (SCID-I/P, 1/2007 revision). New York: Biometrics Research, New York State Psychiatric Institute; 2007.
- 26. Kaufman J, Birmaher B, Brent D, et al. Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS-PL): Initial reliability and validity data. J Am Acad Child Adolesc Psychiatry. 1997;36:980–8. doi: 10.1097/00004583-199707000-00021.
- 27. Carskadon MA, Acebo C. A self-administered rating scale for pubertal development. J Adolesc Health. 1993;14:190–5. doi: 10.1016/1054-139x(93)90004-9.
- 28. Hoddes E, Zarcone V, Smythe H, et al. Quantification of sleepiness: a new approach. Psychophysiology. 1973;10:431–6. doi: 10.1111/j.1469-8986.1973.tb00801.x.
- 29. Halford WK, Keefer E, Osgarby SM. "How has the week been for you two?" Relationship satisfaction and hindsight memory biases in couples' reports of relationship events. Cogn Ther Res. 2002;26:759–73.
- 30. Harvey AG, Stinson K, Whitaker KL, et al. The subjective meaning of sleep quality: A comparison of individuals with and without insomnia. Sleep. 2008;31:383–93. doi: 10.1093/sleep/31.3.383.
- 31. Laurent J, Catanzaro S, Joiner T, et al. A measure of positive and negative affect for children: Scale development and initial validation. Psychol Assess. 1999;11:326–38.
- 32. Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86:420–8. doi: 10.1037//0033-2909.86.2.420.
- 33. Bachorowski J, Owren MJ. Vocal expressions of emotion. In: Lewis M, Haviland-Jones JM, Barrett LF, editors. Handbook of emotions. New York: Guilford Press; 2008. pp. 196–210.
- 34. Fernandez R, Picard R. Classical and novel discriminant features for affect recognition from speech. Interspeech - Eurospeech Conference on Speech Communication and Technology; 2005.
- 35. Moore E, Clements M, Peifer J, Weisser L. Analysis of prosodic variation in speech for clinical depression. Proceedings of the 25th Annual Conference on Engineering in Medicine and Biology; 2003. pp. 2925–8.
- 36. Boersma P, Weenink D. Praat: Doing phonetics by computer (Version 5.1.05) [Computer program]. 2009. Retrieved May 1, 2009, from http://www.praat.org/.
- 37. Russell JA. The circumplex model of affect. J Pers Soc Psychol. 1980;39:1161–78.
- 38. Fuchs M, Froehlich M, Hentschel B, et al. Predicting mutational change in the speaking voice of boys. J Voice. 2007;21:169–78. doi: 10.1016/j.jvoice.2005.10.008.
- 39. Crowley SJ, Acebo C, Carskadon MA. Sleep, circadian rhythms, and delayed phase in adolescence. Sleep Med. 2007;8:602–12. doi: 10.1016/j.sleep.2006.12.002.
- 40. Hasler BP, Mehl MR, Bootzin RR, Vazire S. Preliminary evidence of diurnal rhythms in everyday behaviors associated with positive affect. J Res Pers. 2008;42:1537–46.
- 41. Wolfson AR, Carskadon MA. Understanding adolescents' sleep patterns and school performance: a critical appraisal. Sleep Med Rev. 2003;7:491–506. doi: 10.1016/s1087-0792(03)90003-7.
- 42. Shanahan TL, Czeisler CA. Light exposure induces equivalent phase shifts of the endogenous circadian rhythms of circulating plasma melatonin and core body temperature in men. J Clin Endocrinol Metab. 1991;73:227–35. doi: 10.1210/jcem-73-2-227.

