Published in final edited form as: Behav Res Methods. 2005 Nov;37(4):626–630. doi: 10.3758/bf03192732

Emotional category data on images from the International Affective Picture System

Joseph A. Mikels, Barbara L. Fredrickson, Gregory R. Larkin, Casey M. Lindberg, Sam J. Maglio, and Patricia A. Reuter-Lorenz

Abstract

The International Affective Picture System (IAPS) is widely used in studies of emotion and has been characterized primarily along the dimensions of valence, arousal, and dominance. Even though research has shown that the IAPS is useful in the study of discrete emotions, the categorical structure of the IAPS has not been characterized thoroughly. The purpose of the present project was to collect descriptive emotional category data on subsets of the IAPS in an effort to identify images that elicit one discrete emotion more than others. These data reveal multiple emotional categories for the images and indicate that this image set has great potential in the investigation of discrete emotions. This article makes these data available to researchers with such interests.

Psychological researchers use many diverse methods to investigate emotion in the laboratory. These procedures range from imagery inductions to film clips and static pictures. One of the most widely used stimulus sets is the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 1999), a set of static images based on a dimensional model of emotion. The image set contains various pictures depicting mutilations, snakes, insects, attack scenes, accidents, contamination, illness, loss, pollution, puppies, babies, and landscape scenes, among others. The goal of this article is to offer a more complete characterization of the categorical structure of this stimulus set, with the objective of identifying images that elicit one discrete emotion more than other emotions.

The dimensional approach has provided many insights into affective experience. Although the identity and number of dimensions for parsing emotional space have been debated, empirical investigations have generally shown that models including only two dimensions, valence and arousal, are superior to models including more than these two dimensions (Mehrabian & Russell, 1974; Smith & Ellsworth, 1985; Yik, Russell, & Barrett, 1999). The IAPS has been used to provide abundant insight into the dimensional aspects of emotion. For instance, heart rate and facial electromyographic activity differentiate negative from positive valence, whereas skin conductance increases with increased arousal (Bradley & Lang, 2000; Lang, Bradley, & Cuthbert, 1998). Furthermore, studies in which the IAPS has been used have indicated that emotional stimuli undergo more extensive processing in the visual cortex than do neutral stimuli (Lang et al., 1998). Finally, it has been shown that the startle reflex can be inhibited by the viewing of positive pictures and accentuated by the viewing of arousing negative pictures, an observation that reveals the differential role of positive versus negative affect in the modulation of attention and orienting (Cuthbert, Bradley, & Lang, 1996). Alternatively, other emotion theorists subscribe to a discrete categorical model of emotion in which emotions are distinguished from one another by specific attributes, such as cognitive appraisals, bodily responses, or action tendencies (Ekman, 1992; Izard, 1977; Lazarus, 1991; Tomkins, 1962).

Discrete categorical models have also provided numerous empirical insights. For instance, cross-cultural studies of the ability to decode emotions from facial cues have suggested a set of universal basic emotions, including anger, disgust, fear, sadness, and enjoyment (Ekman, 1993). In addition, heart rate and finger temperature have been shown to differentiate anger, fear, disgust, and sadness in young and in elderly participants, as well as in different cultures (Levenson, 2003). Importantly, the IAPS has also shed light on discrete emotions, showing that different discrete emotions (disgust, sadness, fear, nurturance, and erotic happiness) have different valence and arousal ratings, and that they can be distinguished by facial electromyographic, heart rate, and electrodermal measures (Bradley, Codispoti, Sabatinelli, & Lang, 2001; Lang, Greenwald, Bradley, & Hamm, 1993).

Although the dimensional and discrete views of emotion appear to be at odds with one another, recent work supports an integrated model of emotion organized around two basic motivational systems: the appetitive and the defensive systems (Bradley, Codispoti, Cuthbert, & Lang, 2001; Cacioppo & Gardner, 1999; Davidson & Irwin, 1999; Lang et al., 1993). According to this view, the two dimensions described earlier, valence and arousal, capture global and basic elements of emotion; valence indicates which motivational system is activated, and arousal marks the intensity of this activation. Specific discrete emotions then constitute a subordinate division of this system that correspond to specific content categories, as has been shown with the IAPS (Bradley, Codispoti, Cuthbert, & Lang, 2001; Bradley, Codispoti, Sabatinelli, & Lang, 2001). Specifically with respect to defensive motivation, pictures that represent attack result in such labels as fear, sadness, and anger, but pictures of contamination elicit the label of disgust (Bradley, Codispoti, Sabatinelli, & Lang, 2001). Whereas images in both of these content categories are subjectively rated as negative, elicit a skin conductance response, and evoke startle reflexes, the attack images, relative to the contamination images, result in larger responses on all three of these measures (Bradley, Codispoti, Cuthbert, & Lang, 2001). Thus, the emotional categories capture the broader content areas, which in turn reflect the level of activation in this defensive motivational system.

Thus, not only is there precedent for using the IAPS to study discrete emotions, but studying discrete emotions alongside the dimensions of emotion also allows for better integration of these two perspectives. Despite the benefits of an integrated approach, extensive data on the categorical structure of the IAPS have until now been generally unavailable.1 The present project was conducted in order to provide discrete categorical data on large subsets of the IAPS images, and this article makes these data accessible. Emotional category ratings were collected with a constrained set of category labels in two studies, one using a subset of negative images (Study 1) and the other a subset of positive images (Study 2). To ensure a valid set of category labels, we first conducted a pilot study to determine which emotional labels participants would generate when unconstrained.

PILOT STUDY

To validate the use of a constrained set of category labels in Studies 1 and 2, we had participants self-generate labels for the emotions that images from the IAPS elicited in them. In this study, as well as in the following two studies, we used two subsets of images. For the negative subset, 203 IAPS images were selected whose normative valence ratings fell below the neutral midpoint of 5 (mean pleasure rating = 3.05, SD = 0.84, range = 1.45–4.59; mean arousal rating = 5.56, SD = 0.92, range = 2.63–7.35; see note 2). The images (400 × 300 pixels) were presented in 32-bit color at a screen resolution of 1,024 × 768 pixels. For the positive subset, 187 IAPS images were selected whose normative valence ratings were equal to or greater than 5 (mean pleasure rating = 7.05, SD = 0.63, range = 5.00–8.34; mean arousal rating = 4.87, SD = 0.98, range = 2.90–7.35; see note 2). Erotic images were excluded because of the complicated patterns of emotion they elicit in males and females, which may be contingent on sociocultural factors (Bradley, Codispoti, Sabatinelli, & Lang, 2001). To supplement this sample of positive images, we included 51 commercially produced landscape images from Corel Photo Libraries.
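
For readers working from the published normative ratings, this selection rule reduces to a simple filter on valence. The Python sketch below illustrates the idea; the column names, image identifiers, and rating values are our own invented examples, not the format or content of the actual norms file.

```python
import pandas as pd

# Hypothetical table of IAPS normative ratings (after Lang et al., 1999);
# IDs and values are invented for illustration only.
norms = pd.DataFrame({
    "iaps_id": [1001, 2002, 3003, 4004],
    "valence": [3.20, 7.80, 1.90, 6.40],
    "arousal": [6.10, 4.30, 7.00, 3.50],
})

# Negative subset: normative valence below the neutral midpoint of 5.
negative = norms[norms["valence"] < 5]

# Positive subset: normative valence at or above 5
# (erotic images were excluded by hand in the actual study).
positive = norms[norms["valence"] >= 5]
```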

Twenty undergraduate participants (10 females and 10 males; mean age = 19.55 years) viewed each picture and typed an emotional label on the computer keyboard; the label appeared beneath the image as they typed. The participants were allowed to use as many emotional labels as they needed to describe their emotional reaction fully.

For each emotional label, we calculated the percentage of the total number of negative emotional labels produced that it accounted for. The top 10 negative emotional category labels generated by the participants were fear (22.83%), sadness (20.26%), disgust (19.86%), anger (8.97%), pity (4.30%), contempt (2.91%), scared (1.73%), shock (1.18%), concern (1.02%), and anxiety (0.84%). In addition, for a given image, the participants used multiple labels on 21.35% of their responses, indicating that a procedure allowing participants to provide multiple emotion labels for any one image is most appropriate. Furthermore, previous research examining discrete categories and the IAPS has indicated that many of the images elicit multiple discrete emotions (Bradley, Codispoti, Sabatinelli, & Lang, 2001).
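
The tally described above is straightforward to reproduce. The following minimal sketch, with hypothetical response data and variable names of our own choosing, computes both the per-label percentages and the multiple-label rate.

```python
from collections import Counter

# Each trial's response is a list of one or more self-generated labels,
# pooled across participants; these example responses are invented.
responses = [["fear"], ["sadness", "pity"], ["disgust"], ["fear", "anger"]]

all_labels = [label for trial in responses for label in trial]
counts = Counter(all_labels)
total = len(all_labels)

# Percentage of all labels accounted for by each label, most frequent first.
percentages = {label: 100 * n / total for label, n in counts.most_common()}

# Share of trials on which participants used more than one label.
multi_label_rate = 100 * sum(len(t) > 1 for t in responses) / len(responses)
```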

In a separate and counterbalanced block, the participants generated emotional labels for the positive emotion subset. As with the negative images, we calculated the percentage of the total number of positive emotional labels that each label accounted for. The top 10 positive emotional category labels generated by the participants were awe (24.30%), amusement (11.54%), happy (7.64%), excitement (7.20%), content (6.98%), interest (3.01%), desire (2.94%), curious (1.81%), peaceful (1.64%), and affection (1.32%).

STUDY 1

Method

Participants

Sixty participants (see note 3) with no psychiatric histories completed Study 1 (mean age = 18.7 years). The group consisted of 30 females and 30 males, who received course credit for participation.

Apparatus

Macintosh G3 computers with PsyScope software (J. D. Cohen, MacWhinney, Flatt, & Provost, 1993) were used for stimulus presentation and data acquisition.

Materials

The 203 negative images in the pilot study were also used in this study. The images were divided into two randomly ordered subsets. Subset order was counterbalanced across participants.

Design and Procedure

The participants were first screened to ensure their willingness to view negatively valenced graphic images by showing them four representative pictures not used in the experiment. Six individuals declined participation after screening. The participants were run in groups of 4–15. While they viewed each image, four labels were presented one at a time beneath the picture, in the following order: fear, disgust, sadness, and anger. Because the top four labels in the pilot study (fear, sadness, disgust, and anger) accounted for 71.9% of all negative emotional labels generated, these four category labels seemed adequate for capturing the single discrete emotions elicited by the IAPS. Furthermore, these four emotions are considered universal and basic negative emotions (Ekman, 1993) and have previously been shown to capture well the discrete emotions of the IAPS (Lang et al., 1993). Only negative emotional category labels were used, because the images were chosen to be of negative valence according to the data in Lang et al. (1999). Our objective was to obtain single discrete emotional labels for as many images as possible, while minimizing the number of blended images. Nonetheless, some of the images (especially those near the midpoint of the valence scale) may be blends of negative and positive emotions; the present design does not allow for the examination of such mixed-valence blends, so for images near the neutral midpoint, our categorical data may not capture potentially mixed positive and negative emotions. The participants responded on a 7-point scale, indicating the degree to which they felt the specified emotion by pressing a number (1–7) on the computer keyboard, with 1 indicating not at all and 7 indicating a great amount. This procedure allowed the participants to endorse multiple labels for a given image. The participants were led through a sample picture and then rated all the pictures on all four emotion labels at their own pace.

Results and Discussion

Means for the four ratings were calculated for each image individually. A 90% confidence interval (CI) was constructed around each mean, and category membership was determined according to the overlap of the intervals. For a given picture, if the mean for one emotion was higher than the means of all the other emotions and its CI did not overlap with the CIs of the other three emotional categories, the image was classified within that single emotion category. If two or three means were higher than the rest and the CIs of those means overlapped only with each other, the image was categorized as blended. If all four CIs overlapped, the image was classified as undifferentiated. This procedure was used because our objective was to find images that elicited one discrete emotion more than the others. However, alternative analysis methods could be employed, depending on the specific interests of the researcher; for a discussion of such methods, see the General Discussion section.
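
For concreteness, the following Python sketch implements one reading of this classification rule. It simplifies matters by checking each label's interval only against the top-rated label's interval, so it is an approximation of the procedure described above rather than the exact analysis code.

```python
import numpy as np
from scipy import stats

EMOTIONS = ["fear", "disgust", "sadness", "anger"]

def classify_image(ratings):
    """Classify one image from an (n_participants x 4) array of 1-7 ratings.

    A sketch of the 90% CI-overlap rule described above (simplified to
    compare each interval against the top-rated label's interval only).
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.shape[0]
    means = ratings.mean(axis=0)
    sems = ratings.std(axis=0, ddof=1) / np.sqrt(n)
    half = stats.t.ppf(0.95, df=n - 1) * sems   # half-width of 90% CI
    lo, hi = means - half, means + half

    top = int(np.argmax(means))
    # Labels whose CI overlaps the top-rated label's CI (top included).
    leaders = [e for i, e in enumerate(EMOTIONS)
               if lo[i] <= hi[top] and lo[top] <= hi[i]]

    if len(leaders) == 1:
        return leaders[0]            # single discrete emotion
    if len(leaders) < len(EMOTIONS):
        return "blended"             # two or three overlapping leaders
    return "undifferentiated"        # all four CIs overlap
```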

This parsing scheme yielded five categories for the 203 images used in Study 1: disgust (n = 31), fear (n = 12), sadness (n = 42), blended (n = 48), and undifferentiated (n = 70). Mean ratings and standard deviations for each picture on the four categories, as well as the categorical classification for each image, appear in the archived files. Example images in each category include disgust (a burn victim; a dirty toilet bowl), fear (snakes; a tornado), and sadness (a dying hospitalized woman; women crying).

To examine gender differences in the ratings, we computed repeated measures ANOVAs for each image, with emotional label as a within-subjects factor and gender as a between-subjects factor. This analysis revealed that 13.79% of the 203 negative images received different categorical ratings across genders. These images are marked with an asterisk in the archived files. Thus, males and females differed minimally in their categorical labeling of IAPS images, which is consistent with previous accounts of general agreement between males and females on such a task (Bradley, Codispoti, Sabatinelli, & Lang, 2001).
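
By way of illustration, a mixed-design ANOVA of this kind can be run per image as sketched below. We use the pingouin library and synthetic data purely for demonstration; this is not the software used for the original analysis, and all names are our own.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format ratings for a single image: one row per
# participant x emotion-label combination (values are random here).
rng = np.random.default_rng(0)
emotions = ["fear", "disgust", "sadness", "anger"]
rows = []
for subj in range(60):
    gender = "female" if subj < 30 else "male"
    for emo in emotions:
        rows.append({"subject": subj, "gender": gender,
                     "emotion": emo, "rating": rng.integers(1, 8)})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: emotion within subjects, gender between subjects.
# A reliable gender x emotion interaction would flag the image as
# receiving different categorical ratings across genders.
aov = pg.mixed_anova(data=df, dv="rating", within="emotion",
                     subject="subject", between="gender")
print(aov[["Source", "F", "p-unc"]])
```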

STUDY 2

Method

Participants

Sixty individuals (see note 3) with no psychiatric histories, who had not participated in Study 1, participated in Study 2 (mean age = 18.8 years), of whom 30 were female and 30 were male. They received course credit for participation.

Apparatus

The apparatus was the same as that in Study 1.

Materials

The 187 images in the positive emotion subset from the pilot study were used in this study. The pictures were divided into two randomly ordered subsets, and subset order was counterbalanced across participants.

Design and Procedure

The design and procedure for Study 2 were the same as those for Study 1, with two exceptions. First, the participants were not screened for their willingness to view the images, since no graphic images were used in this study. Second, different category labels were used in Study 2: awe, excitement, contentment, and amusement. We chose not to use the label happy, given its many meanings. As Diener, Scollon, and Lucas (2003) have described, happiness has a multiplicity of meanings (such as pleasure, life satisfaction, and positive emotions) and is a complicated construct; in their discussion, it was used somewhat synonymously with subjective well-being. Happiness is thus a broad emotional term that is somewhat nonspecific with respect to positive emotions. Importantly, as was stated above, we were concerned with discovering single discrete emotions. When the pilot study data are restricted to cases in which only a single emotion was generated, the percentages change: the top five category labels become awe (26.41%), amusement (11.88%), content (7.56%), excitement (6.26%), and happy (6.01%). Thus, in the pilot study, happiness was generated more often as part of a blended response than as a single discrete emotion, relative to the other labels.

Results and Discussion

As in Study 1, means for the four ratings were calculated, 90% CIs were constructed around each mean, and category membership was determined by the overlap of the CIs. This parsing procedure resulted in six categories: amusement (n = 10), awe (n = 7), contentment (n = 15), excitement (n = 10), blended (n = 71), and undifferentiated (n = 74). Mean ratings and standard deviations for each picture on the four categories, as well as the category membership for each image, are presented in the archived files. Examples of single discrete emotion images include amusement (laughing monkeys; older women with many birds perched on them), awe (a desert scene; an astronaut in space), contentment (a mother and her child; an older couple), and excitement (skiing; whitewater rafting).

We also examined gender differences in Study 2 with the same ANOVA procedure as that used in Study 1. We found that categorical ratings on 5.46% of the 238 images differed between genders, and these images are marked with an asterisk in the archived files. This result underscores the consistency between males and females in their categorical labeling of IAPS images, discussed above (Bradley, Codispoti, Sabatinelli, & Lang, 2001).

GENERAL DISCUSSION

The studies in this report provide categorical data that will allow the IAPS to be used more generally in the study of emotion from a discrete categorical perspective. In accord with previous reports (Bradley, Codispoti, Sabatinelli, & Lang, 2001), gender differences in the emotional categorization of the IAPS images were minimal. These data show that there are numerous images that elicit single discrete emotions and, further, that overall, a majority of the images elicit either single discrete emotions or emotions that represent a blend of discrete emotions, also in accord with previous reports (Bradley, Codispoti, Sabatinelli, & Lang, 2001).

Importantly, these data will allow researchers to examine the effects of single discrete emotions versus blended emotions on behavior. For instance, different emotions have been shown to have different cognitive and behavioral consequences. Sadness and anger have different effects on judgments of the likelihood of situationally caused versus human-caused events (Keltner, Ellsworth, & Edwards, 1993); fear and anger have different effects on whether a future event is perceived pessimistically or optimistically (Lerner & Keltner, 2000); and fear leads to risk aversion, whereas anger leads to risk seeking (Lerner & Keltner, 2001). Thus, given the differential effects of discrete emotions on cognition and behavior, blended emotions may have significantly different effects relative to single discrete emotions, and the present data set will allow researchers to examine such differences.

Although these conclusions hold for our method of analysis, alternative methods could be used to interrogate the data. For instance, our method was devised to classify images whose rating on one discrete emotion label was significantly higher than their ratings on the other labels, even though the intensity of that single rating may be lower than the ratings of other images eliciting blended or undifferentiated emotions. Thus, a researcher interested in capturing the intensity of a given discrete emotional label, rather than its more absolute categorical membership, might use a different method, such as collapsing across all of the images and determining the categorical membership of a given picture by its rating relative to the other images (e.g., one standard deviation above the overall mean). The data for each image across all ratings are available, so researchers may use alternative methods if they so desire.
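
As one concrete version of this alternative, the sketch below flags images whose mean rating on a given label sits at least one standard deviation above the mean across all images. The function name and default threshold are our illustrative choices, not part of the published method.

```python
import numpy as np

def intense_images(image_means, threshold_sd=1.0):
    """Return indices of images rated intensely on one emotion label.

    image_means: 1-D array with one mean rating per image for a single
    label. An image is flagged when its mean is at least `threshold_sd`
    standard deviations above the mean across all images, implementing
    the relative, intensity-based scheme suggested above.
    """
    image_means = np.asarray(image_means, dtype=float)
    cutoff = image_means.mean() + threshold_sd * image_means.std(ddof=1)
    return np.nonzero(image_means >= cutoff)[0]
```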

Interestingly, there appear to be fewer instances of single positive discrete emotions elicited by this set of images. Theoretically, positive emotions may be fewer in number, more diffuse, and more subtle (Fredrickson, 2001; Rozin & Royzman, 2001). Alternatively, this finding may result from our exclusion of erotic images or of happy as a label. Nonetheless, the majority of positive emotion images still elicit distinguishable discrete emotions, although many of these are blended. Equally interesting, anger by itself was not elicited by these images. This outcome is consistent with the methodological difficulties associated with anger induction generally (Gerrards-Hesse, Spies, & Hesse, 1994; Gross & Levenson, 1995). Anger is contingent on appraisals of extreme unpleasantness, high effort, high certainty, and strong human agency (Smith & Ellsworth, 1985, 1987) or, more simply, on an appraisal of a demeaning offense against the self (Lazarus, 1991). Such prerequisite conditions are difficult to achieve with the passive and essentially effortless viewing of static images. In contrast, these appraisal dimensions can be fulfilled with procedures such as the Velten technique or relived memories, which require focused attention and a relatively higher level of effort (Engebretson, Sirota, Niaura, Edwards, & Brown, 1999).

Many diverse emotion induction techniques exist, and the effectiveness of a given technique varies with the specific emotion being induced. The IAPS has previously been used primarily in the study of the dimensional aspects of emotion. This article makes available new categorical data for the IAPS that have the potential to increase the utility of this image set.

Footnotes

1

Note, however, that Davis et al. (1995) collected categorical data on a small subset of the images (114 images from the current set of over 700).

2

Mean ratings calculated from the data in Lang et al. (1999). Pleasure ratings are based on a 9-point scale, with 5 constituting neutral, 1 the extreme negative, and 9 the extreme positive; arousal ratings are also based on a 9-point scale ranging from low (1) to high (9) arousal. Given the concern that some of the images near the neutral midpoint in the negative and positive subsets may be blends of negative and positive emotions, we determined that using a criterion of two standard deviations from the subset means would indicate with reasonable confidence that images within such a range are from a separate negative or positive population of images. With this criterion, three positive images fall outside the range (Images 7640, 8160, and 8232) and should thus be used with caution. These images are identified in the archived files.

3

On the basis of Cohen's estimations of power (presuming a medium effect size and an α of .05), a sample size of 60 is sufficient to detect significant differences between the categorical ratings for a given image (J. Cohen, 1992). In addition, an examination of the variability of our ratings (overall SD = 1.76), relative to the variability of Lang et al.'s (1999) valence and arousal ratings (overall SD = 1.90), indicates comparable variability between these two normative data sets.

ARCHIVED MATERIALS

The following materials and links may be accessed through the Psychonomic Society's Norms, Stimuli, and Data Archive, http://www.psychonomic.org/archive/. To access these files or links, search the archive for this article using the journal (Behavior Research Methods), the first author's name (Mikels), and the publication year (2005).

File: mikels-BRM-2005.zip

Description: The compressed archive file contains two files:

mikels2005negativenorms.txt, containing the norms developed by Mikels et al. (2005) as a tab-delimited text file generated by Microsoft Excel.

mikels2005positivenorms.txt, containing the norms developed by Mikels et al. (2005) as a tab-delimited text file generated by Microsoft Excel.

Author's e-mail address: jmikels@psych.stanford.edu

Author's Web site: http://psychology.stanford.edu/∼jmikels/

REFERENCES

  1. Bradley MM, Codispoti M, Cuthbert BN, Lang PJ. Emotion and motivation: I. Defensive and appetitive reactions in picture processing. Emotion. 2001;1:276–298. [PubMed] [Google Scholar]
  2. Bradley MM, Codispoti M, Sabatinelli D, Lang PJ. Emotion and motivation: II. Sex differences in picture processing. Emotion. 2001;1:300–319. [PubMed] [Google Scholar]
  3. Bradley MM, Lang PJ. Measuring emotion: Behavior, feeling, and physiology. In: Lane RD, Nadel L, editors. Cognitive neuroscience of emotion. Oxford University Press; New York: 2000. pp. 242–276. [Google Scholar]
  4. Cacioppo JT, Gardner WL. Emotions. Annual Review of Psychology. 1999;50:191–214. doi: 10.1146/annurev.psych.50.1.191. [DOI] [PubMed] [Google Scholar]
  5. Cohen J. A power primer. Psychological Bulletin. 1992;112:155–159. doi: 10.1037//0033-2909.112.1.155. [DOI] [PubMed] [Google Scholar]
  6. Cohen JD, MacWhinney B, Flatt M, Provost J. PsyScope: An interactive graphic system for designing and controlling experiments in the psychology laboratory using Macintosh computers. Behavior Research Methods, Instruments, & Computers. 1993;25:257–271. [Google Scholar]
  7. Cuthbert BN, Bradley MM, Lang PJ. Probing picture perception: Activation and emotion. Psychophysiology. 1996;33:103–111. doi: 10.1111/j.1469-8986.1996.tb02114.x. [DOI] [PubMed] [Google Scholar]
  8. Davidson RJ, Irwin W. The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences. 1999;3:11–21. doi: 10.1016/s1364-6613(98)01265-0. [DOI] [PubMed] [Google Scholar]
  9. Davis WJ, Rahman MA, Smith LJ, Burns A, Senecal L, et al. Properties of the human affect induced by static color slides (IAPS): Dimensional, categorical and electromyographic analysis. Biological Psychology. 1995;41:229–253. doi: 10.1016/0301-0511(95)05141-4. [DOI] [PubMed] [Google Scholar]
  10. Diener E, Scollon CN, Lucas RE. The evolving concept of subjective well-being: The multifaceted nature of happiness. Advances in Cell Aging & Gerontology. 2003;15:187–219. [Google Scholar]
  11. Ekman P. An argument for basic emotions. Cognition & Emotion. 1992;6:169–200. [Google Scholar]
  12. Ekman P. Facial expression and emotion. American Psychologist. 1993;48:384–392. doi: 10.1037//0003-066x.48.4.384. [DOI] [PubMed] [Google Scholar]
  13. Engebretson TO, Sirota AD, Niaura RS, Edwards K, Brown WA. A simple laboratory method for inducing anger: A preliminary investigation. Journal of Psychosomatic Research. 1999;47:13–26. doi: 10.1016/s0022-3999(99)00012-4. [DOI] [PubMed] [Google Scholar]
  14. Fredrickson BL. The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. American Psychologist. 2001;56:218–226. doi: 10.1037//0003-066x.56.3.218. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Gerrards-Hesse A, Spies K, Hesse FW. Experimental inductions of emotional states and their effectiveness: A review. British Journal of Psychology. 1994;85:55–78. [Google Scholar]
  16. Gross JJ, Levenson RW. Emotion elicitation using films. Cognition & Emotion. 1995;9:87–108. [Google Scholar]
  17. Izard CE. Human emotions. Plenum; New York: 1977. [Google Scholar]
  18. Keltner D, Ellsworth PC, Edwards K. Beyond simple pessimism: Effects of sadness and anger on social perception. Journal of Personality & Social Psychology. 1993;64:740–752. doi: 10.1037//0022-3514.64.5.740. [DOI] [PubMed] [Google Scholar]
  19. Lang PJ, Bradley MM, Cuthbert BN. Emotion, motivation, and anxiety: Brain mechanisms and psychophysiology. Biological Psychiatry. 1998;44:1248–1263. doi: 10.1016/s0006-3223(98)00275-3. [DOI] [PubMed] [Google Scholar]
  20. Lang PJ, Bradley MM, Cuthbert BN. International affective picture system (IAPS): Technical manual and affective ratings. University of Florida, Center for Research in Psychophysiology; Gainesville: 1999. [Google Scholar]
  21. Lang PJ, Greenwald MK, Bradley MM, Hamm AO. Looking at pictures: Affective, facial, visceral, and behavioral reactions. Psychophysiology. 1993;30:261–273. doi: 10.1111/j.1469-8986.1993.tb03352.x. [DOI] [PubMed] [Google Scholar]
  22. Lazarus R. Emotion and adaptation. Oxford University Press; New York: 1991. [Google Scholar]
  23. Lerner JS, Keltner D. Beyond valence: Toward a model of emotion-specific influences on judgment and choice. Cognition & Emotion. 2000;14:473–493. [Google Scholar]
  24. Lerner JS, Keltner D. Fear, anger, and risk. Journal of Personality & Social Psychology. 2001;81:146–159. doi: 10.1037//0022-3514.81.1.146. [DOI] [PubMed] [Google Scholar]
  25. Levenson RW. Autonomic specificity and emotion. In: Davidson RJ, Scherer KR, Goldsmith HH, editors. Handbook of affective sciences. Oxford University Press; New York: 2003. pp. 212–224. [Google Scholar]
  26. Mehrabian A, Russell JA. An approach to environmental psychology. MIT Press; Cambridge, MA: 1974. [Google Scholar]
  27. Rozin P, Royzman E. Negativity bias, negativity dominance, and cognition. Personality & Social Psychology Review. 2001;5:296–320. [Google Scholar]
  28. Smith CA, Ellsworth PC. Patterns of cognitive appraisal in emotion. Journal of Personality & Social Psychology. 1985;48:813–838. [PubMed] [Google Scholar]
  29. Smith CA, Ellsworth PC. Patterns of appraisal and emotion related to taking an exam. Journal of Personality & Social Psychology. 1987;52:475–488. doi: 10.1037//0022-3514.52.3.475. [DOI] [PubMed] [Google Scholar]
  30. Tomkins SS. Affect, imagery, consciousness. 1-4. Springer; New York: 1962. [Google Scholar]
  31. Yik MSM, Russell JA, Barrett LF. Structure of self-reported current affect: Integration and beyond. Journal of Personality & Social Psychology. 1999;77:600–619. [Google Scholar]
