Abstract
Recent studies have documented robust and intriguing associations between affect and performance in cognitive tasks. The present two experiments sought to extend this line of work to potential cross-modal effects. Specifically, they examined whether word evaluations would bias subsequent judgments of low- and high-pitch tones. Because affective metaphors and related associations consistently indicate that positive is high and negative is low, we predicted and found that positive evaluations biased tone judgments toward high-pitch tones, whereas the opposite was true of negative evaluations. Effects emerged on accuracy rates, response biases, and reaction times, and they occurred despite the irrelevance of prime evaluations to the tone judgment task. In addition to clarifying the nature of these cross-modal associations, the present results reinforce the idea that affective evaluations exert large effects on perceptual judgments related to verticality.
Associations between affect and perception appear frequently in colloquial language. For example, “up” feelings are those that are pleasant, whereas “down” feelings are those that are unpleasant. A “high” grade is a desirable grade, whereas a “low” grade is an undesirable grade. Similarly, with respect to the auditory modality, conventional wisdom suggests that high tones and ascending melodies are likely to elevate the mood of the listener, whereas low tones and descending melodies are likely to depress the mood of the listener (for a review, see Gabrielsson & Lindström, 2001).
It is easy to compile an extensive list of phrases implying systematic associations between affect and verticality (Kövecses, 2000; Schwartz, 1981). However, scholars have disagreed on whether conventional metaphors of this type actually influence online representational processes (for a review, see Gibbs, 1994). Arguing in favor of this idea, Lakoff and Johnson (1999) suggested that abstract thoughts, such as those related to affect, utilize some of the same neural circuits that are also involved in cognitive representations of perceptual stimuli, including those related to verticality. This suggestion appears compatible with recent theory and data related to the embodied nature of representation (see, e.g., Barsalou, 1999; Glenberg, 1997).
In support of the idea that affect and processes related to vertical representation are systematically related, Stepper and Strack (1993) found that an upright posture led to a greater experience of pride than a slumped posture. In relation to the opposite direction of influence—namely from evaluations to verticality-related judgments—there are a number of suggestive findings. For example, Wapner, Werner, and Krus (1957) manipulated mood states and found that happier mood states were associated with line bisections that were higher within vertical space. Other relevant findings, reviewed by Meier and Robinson (2005), similarly suggest that affective evaluations prime vertical representations in the visuospatial domain.
Affect and Verticality in Tone Perception
Although auditory tones do not have an inherent verticality to them, Melara and O’Brien (1987) found that high-pitch (HP) tones were categorized faster in the presence of an upper-space visual cue, whereas low-pitch (LP) tones were categorized faster in the presence of a lower-space visual cue. This systematic relation between high and low tones and high and low regions of visual space appears to involve preverbal associations, since infants less than a year old preferentially look at an upward arrow following an ascending tone sequence, but preferentially look at a downward arrow following a descending tone sequence (Wagner, Winner, Cicchetti, & Gardner, 1981).
These associations between pitch and verticality (e.g., Melara & O’Brien, 1987), together with the associations between verticality and affective metaphor (e.g., Meier & Robinson, 2004), motivated the present experiments, which were designed to examine whether affective evaluations might bias auditory perceptual judgments. In support of this cross-modal mapping between evaluation and pitch, Hevner (1937, as cited in Gabrielsson & Lindström, 2001) asked listeners to characterize short pieces of music varying in tempo and pitch on a number of verbal dimensions, including those related to mood. Participants associated the LP selections with sad mood and darkness, whereas they associated the HP selections with happy mood and brightness.
Perhaps of more relevance to the cognitive methods used here are developmental data showing that 7-month-old infants associate a sequence of ascending (descending) tones with a joyful (sad) mood, as inferred from their preferential looking at a face with the respective emotional expression (Phillips, Wagner, Fells, & Lynch, 1990). Research has also shown that when children are asked to sing a song in either a happy or sad mood, they sing happy songs louder, faster, and at a higher pitch than sad songs (Adachi & Trehub, 1998). Of most importance here are the pitch findings, again suggesting a systematic relationship between affect and pitch.
On the basis of prior results related to affect and verticality (reviewed in Meier & Robinson, 2005), we predicted that affective primes would systematically bias tone classifications, with positive words priming HP tones and negative words priming LP tones.
Two experiments introduced affective primes by having participants verbally evaluate positive (e.g., love) and negative (e.g., nasty) words that were shown at the center of a computer screen. Following each evaluation, participants heard either an HP or an LP tone over headphones and were merely asked to indicate which tone they heard. Experiment 1 examined whether affective evaluations prime tone classification in a manner consistent with spatial metaphor. Experiment 2 extended Experiment 1 methodologically by using a more difficult discrimination within a signal detection design, in order to examine whether affective priming influences tone perception itself or instead biases responses.
EXPERIMENT 1
Experiment 1 was designed to investigate cross-modal priming effects related to affect and tone perception. Each trial involved a randomly selected positive or negative word, followed by a randomly selected LP or HP tone. Thus, there was no predictive relationship between the affective value of the prime and the to-be-classified target. Nevertheless, if affective value is related to verticality, then positive primes should facilitate HP-tone classification and negative primes should facilitate LP-tone classification.
Method
Participants
Participants were 20 undergraduates who received extra credit.
Materials
Positive and negative word primes
A total of 100 words that had been used in previous studies (e.g., Meier & Robinson, 2004) served as primes. Fifty had a positive meaning (e.g., kiss), and fifty had a negative meaning (e.g., dead). The average number of letters per word was similar for the two valences (F < 1). Eight individuals rated the valence of the words: Positive words were rated as more positive than negative words [F(1,98) = 1,040.44, p < .001]. In addition, the absolute difference between each word’s rating and the neutral midpoint was similar for the two valences (F < 1).
HP and LP tones
Two different 100-msec tones served as target stimuli: one with a pitch of 500 Hz (the LP tone) and one with a pitch of 2000 Hz (the HP tone). Ten pilot participants (who did not take part in either experiment) rated the extent to which each tone was pleasant (1 = extremely unpleasant, 7 = extremely pleasant). The tones were rated as equally pleasant (t < 1), which rules out potential confounds related to affect induced by the tones themselves.
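To make the stimulus construction concrete, the following Python sketch generates comparable sine-wave tones. Only the 100-msec duration and the 500- and 2000-Hz frequencies come from the text above; the sample rate, amplitude, and onset/offset ramps are illustrative assumptions, since those parameters are not reported.

```python
import numpy as np

def make_tone(freq_hz, duration_ms=100, sample_rate=44100, ramp_ms=5):
    """Generate a sine tone with brief linear onset/offset ramps to avoid clicks."""
    n_samples = int(sample_rate * duration_ms / 1000)
    t = np.arange(n_samples) / sample_rate
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(sample_rate * ramp_ms / 1000)
    envelope = np.ones(n_samples)
    envelope[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)   # fade in
    envelope[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)  # fade out
    return tone * envelope

lp_tone = make_tone(500)    # LP target tone
hp_tone = make_tone(2000)   # HP target tone (800 Hz in Experiment 2)
```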
Procedure
Participants were told that the task concerned their ability to evaluate words as having either a good or a bad meaning while performing an intervening task. To reduce the possibility of response-compatibility contributions to performance, we required a verbal response to affective primes and a manual response to tone targets. Participants wore headphones with a boom microphone. Each of the 100 prime words was presented one at a time at the center of a computer screen, in 18-point white Arial font on a black background. Participants were instructed to evaluate each word aloud by saying “bad” for words with a bad meaning and “good” for words with a good meaning.
One hundred fifty milliseconds after each evaluation, one of the tones was presented. Participants were asked to press the “1” button on a response box if they heard “Tone A” (the HP tone) and the “5” button if they heard “Tone B” (the LP tone). Examples of each tone type were given before the task began; no mention was made of high or low tones, which were simply described as A and B. If a tone judgment was incorrect, the word “incorrect” was shown for 1.5 sec; if it was correct, a 250-msec blank interval preceded the next affective prime. Participants evaluated each prime word twice, for a total of 200 trials. The accuracy and reaction time (RT) of tone judgments were measured.
Results and Discussion
In both experiments, subject-level analyses are noted by a subscript of “1,” whereas item-level analyses are noted by a subscript of “2.” Participants were inaccurate on 6.6% of the tone classification trials, and error rates were subjected to a 2 (prime type: positive vs. negative) × 2 (target type: Tone A vs. Tone B) ANOVA. There was no reliable effect of prime type [F1(1,19) = 1.44, p = .245; F2(1,98) = 1.16, p = .285] or of target type [F1(1,19) = 1.47, p = .240; F2(1,98) = 2.68, p = .105], but there was a prime type × target type interaction [F1(1,19) = 19.60, p < .001]. This interaction was also significant over items [F2(1,98) = 74.79, p < .001]. As shown in Table 1, participants were more accurate in recognizing the HP tone following positive (vs. negative) primes [t1(19) = 3.90, p = .001, d = 1.33; t2(98) = 6.25, p < .001, d = 1.25], whereas they were more accurate in recognizing the LP tone following negative (vs. positive) primes [t1(19) = 4.19, p = .001, d = 1.24; t2(98) = 6.69, p < .001, d = 1.18].
Table 1. Mean Reaction Times (RT, in Milliseconds) and Error Rates (%) for Tone Classification as a Function of Prime Affect in Experiment 1

| Tone Type  | Positive RT | SE | Positive Error (%) | SE  | Negative RT | SE | Negative Error (%) | SE  |
|------------|-------------|----|--------------------|-----|-------------|----|--------------------|-----|
| Low pitch  | 695         | 35 | 11.5               | 1.9 | 625         | 34 | 3.1                | 0.8 |
| High pitch | 599         | 29 | 2.6                | 0.6 | 677         | 34 | 9.3                | 1.5 |

Note—RT, reaction time; SE, standard error of the mean.
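For readers who wish to reproduce this style of analysis, the following Python sketch runs a subject-level (F1) 2 × 2 repeated measures ANOVA on simulated per-cell error rates. The data, column names, and effect magnitudes are hypothetical; the item-level (F2) analysis would be run analogously, with words rather than participants as the random factor.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(20):                  # 20 participants, as in Experiment 1
    for prime in ("positive", "negative"):
        for target in ("high", "low"):
            # Hypothetical per-cell mean error rate; congruent prime-target
            # pairings (positive/high, negative/low) get lower error rates.
            congruent = (prime == "positive") == (target == "high")
            error = max(0.0, rng.normal(3.0 if congruent else 10.0, 2.0))
            rows.append({"subject": subject, "prime": prime,
                         "target": target, "error": error})
df = pd.DataFrame(rows)

# Subject-level (F1) 2 x 2 repeated measures ANOVA.
print(AnovaRM(df, depvar="error", subject="subject",
              within=["prime", "target"]).fit())
```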
Although accuracy rates were very high, we conducted a signal detection analysis to determine the nature of affective priming, that is, whether it influenced sensory tone perception or induced a bias to respond to tones in a manner consistent with affective metaphor. Hits and false alarms were examined as a function of affective priming, and parameters indexing sensory perception (d′) and response bias (c) were computed. Estimates of d′ were large following both positive (3.37) and negative (3.48) affective primes, and d′ did not differ by prime type (t < 1). These large d′ values are primarily due to the relatively high accuracy rates in this task. Affective priming did, however, influence response bias. Specifically, a positive prime biased participants toward HP responses [c = .34, t(19) = 4.5, p < .001], and a negative prime biased participants toward LP responses [c = −.28, t(19) = 3.69, p = .002]. One source of the affective priming effect was thus relatively clear cut: Priming induced a bias toward responding in an affective metaphor-consistent manner. The d′ data further suggested that priming did not influence acoustic perception, although this finding must be considered tentative in view of the relatively high accuracy rates.
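The signal detection quantities reported above can be computed from hit and false alarm rates as d′ = z(H) − z(F) and c = −[z(H) + z(F)]/2. A minimal Python sketch follows; the trial counts are hypothetical, the log-linear correction is a common convention rather than one reported by the authors, and the sign of c depends on which response is treated as the “signal.”

```python
from scipy.stats import norm

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and response bias (c) from raw trial counts.

    Adds 0.5 to each cell (a log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not make z undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa        # sensitivity: separation of the two tones
    c = -0.5 * (z_hit + z_fa)     # bias: distance of the criterion from neutral
    return d_prime, c

# Hypothetical counts: "hit" = responding "Tone A" (HP) on an HP trial,
# "false alarm" = responding "Tone A" on an LP trial.
print(dprime_and_c(hits=95, misses=5, false_alarms=8, correct_rejections=92))
```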
Classification latencies were computed for correct responses, and RTs more than 2.5 SDs from the mean were replaced with the 2.5 SD cutoff value. RT data were log transformed for the analyses. A 2 (prime type: positive vs. negative) × 2 (target type: Tone A vs. Tone B) ANOVA revealed that latencies were equally fast following positive and negative prime words (Fs < 1.4). Participants were faster to classify the HP tone (M = 638 msec, SD = 133 msec) than the LP tone (M = 660 msec, SD = 149 msec) [F1(1,18) = 8.11, p = .010; F2(1,98) = 5.72, p = .019]. Critically, there was an interaction between prime type and target type [F1(1,19) = 32.59, p < .001; F2(1,98) = 67.88, p < .001]. As shown in Table 1, participants were faster to recognize the HP tone following positive rather than negative primes [t1(19) = 4.30, p < .001, d = .68; t2(98) = 6.35, p < .001, d = 1.27]; conversely, they were faster to recognize the LP tone following negative rather than positive primes [t1(19) = 5.44, p < .001, d = .53; t2(98) = 5.21, p < .001, d = 1.04]. The accuracy and latency data thus converge, indicating that positive affective evaluations facilitated the classification of HP tones and negative evaluations facilitated the classification of LP tones. Signal detection analyses further indicated that this facilitation reflected a response bias rather than an influence of the affective prime on acoustic perception.
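As an illustration of the outlier treatment just described, the following sketch winsorizes RTs at a 2.5 SD cutoff and then log transforms them. The helper function and sample values are hypothetical, and whether the original cutoff was applied to one or both tails is not specified, so symmetric clipping is assumed here.

```python
import numpy as np

def winsorize_and_log(rts, n_sd=2.5):
    """Replace RTs beyond n_sd standard deviations with the cutoff, then log."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    lower = max(mean - n_sd * sd, 1.0)  # guard: keep values positive for log
    upper = mean + n_sd * sd
    return np.log(np.clip(rts, lower, upper))

log_rts = winsorize_and_log([520, 610, 1450, 580, 640])  # hypothetical RTs (msec)
```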
EXPERIMENT 2
Experiment 2 pursued two main goals. First, it sought to confirm a key finding of Experiment 1: the absence of an affective priming effect on acoustic perception (d′). Because the accuracy of tone classification was near ceiling in Experiment 1, Experiment 2 made the discrimination task more difficult by reducing the acoustic difference between the HP and LP tones. Second, Experiment 2 examined whether affective priming is driven primarily by positive primes, by negative primes, or by both. To this end, Experiment 2 included a neutral prime condition that provided a baseline for assessing the relative size of positive and negative affective priming effects.
Method
Participants
Participants were 50 undergraduates.
Materials
Neutral words
Positive and negative words were identical to those presented in Experiment 1. Fifty neutral words were added from the normed list of Bradley and Lang (1999), selected so that their ratings fell at the midpoint of Bradley and Lang’s 1–9 valence scale (M = 5.05, SD = .21).
Procedure
The procedure of Experiment 2 was quite similar to that of Experiment 1, except that positive, neutral, and negative primes were used and were randomly assigned to trials. In the case of neutral primes, participants were instructed to say “neutral” rather than “good” or “bad.” In addition, the tone discrimination task was made more difficult by using tones of 500 Hz (LP) and 800 Hz (HP). Each of the 150 primes was shown once, for a total of 150 trials.
Results and Discussion
Data from two participants were excluded from all analyses because their accuracy was at chance level. The error rate of the remaining participants (M = 8.9%) was somewhat higher than that in Experiment 1. A 3 (prime type: positive, neutral, or negative) × 2 (target type: Tone A or Tone B) ANOVA of error rates did not reveal main effects of prime type or of target type (Fs < 1), but the prime type × target type interaction was once more reliable [F1(2,94) = 24.30, p < .001; F2(2,147) = 44.73, p < .001]. As shown in Table 2, participants were more accurate at detecting HP tones following positive rather than negative words [t1(47) = 5.41, p < .001, d = .82; t2(98) = 6.59, p < .001, d = 1.32] and positive rather than neutral words [t1(47) = 2.69, p = .01, d = .45; t2(98) = 4.20, p < .001, d = .84], whereas they were more accurate at detecting LP tones following negative rather than positive words [t1(47) = 5.70, p < .001, d = .81; t2(98) = 7.13, p < .001, d = 1.43] and negative rather than neutral words [t1(47) = 1.94, p = .058, d = .26; t2(98) = 1.76, p = .081, d = .35].
Table 2. Mean Reaction Times (RT, in Milliseconds) and Error Rates (%) for Tone Classification as a Function of Prime Affect in Experiment 2

| Tone Type  | Positive RT | SE | Positive Error (%) | SE  | Neutral RT | SE | Neutral Error (%) | SE  | Negative RT | SE | Negative Error (%) | SE  |
|------------|-------------|----|--------------------|-----|------------|----|-------------------|-----|-------------|----|--------------------|-----|
| Low pitch  | 725         | 27 | 11.8               | 1.5 | 757        | 29 | 6.9               | 1.2 | 709         | 28 | 4.9                | 0.9 |
| High pitch | 684         | 25 | 4.6                | 1.0 | 738        | 27 | 8.6               | 1.5 | 774         | 26 | 11.4               | 1.3 |

Note—RT, reaction time; SE, standard error of the mean.
A signal detection analysis was applied to hits and false alarms to compute estimates of sensitivity (d′) and response bias (c). Sensitivity indices did not vary by valence (ts < 1.25, ps > .20). However, responses were biased toward HP tones following positive primes (c = .29) in comparison with neutral primes [c = −.03, t(47) = 4.86, p < .001]. Conversely, responses were biased toward LP tones following negative primes (c = −.25) in comparison with neutral primes [c = −.03, t(47) = 3.38, p < .05].
In the RT analyses, outliers greater than 2.5 SDs were again replaced with the 2.5 SD cutoff value, and RT data were log transformed. A 3 (prime type: positive, neutral, or negative) × 2 (target type: Tone A or Tone B) ANOVA revealed no effect of target type (Fs < 1); thus, HP and LP tones were classified with equal speed. Tone classification was influenced, however, by prime type [F1(2,94) = 6.16, p < .005; F2(2,147) = 6.98, p = .001], with faster classification following positive primes (M = 705 msec, SD = 179 msec) than following neutral primes (M = 748 msec, SD = 194 msec) or negative primes (M = 742 msec, SD = 187 msec). Critically, the interaction of prime type and target type was significant [F1(2,94) = 19.39, p < .001; F2(2,147) = 15.06, p < .001]. As shown in Table 2, participants were faster at detecting the HP tone following positive rather than negative primes [t1(47) = 5.65, p < .001, d = .56; t2(98) = 6.64, p < .001, d = 1.33] and following positive rather than neutral primes [t1(47) = 4.00, p < .001, d = .37; t2(98) = 3.98, p < .001, d = .79]. Conversely, they were faster at detecting the LP tone following negative rather than positive primes [t1(47) = 2.15, p = .037, d = .16; t2(98) = 1.74, p = .085, d = .35] and following negative rather than neutral primes [t1(47) = 2.84, p = .007, d = .22; t2(98) = 2.15, p = .034, d = .43].
GENERAL DISCUSSION
The present studies revealed a novel effect of affective primes on the classification of tones, with positive primes facilitating the classification of HP tones and negative primes facilitating the classification of LP tones. Both types of primes were equally effective in comparison with a neutral baseline condition. Signal detection analyses further indicated that the facilitatory effect of affective primes was due to a biasing of responses rather than an enhancement of acoustic processes.
A prominent class of affective metaphors maps valence onto vertical space, such that good things are high and bad things are low. The robust affective priming effects of Experiments 1 and 2 are in harmony with this position. Moreover, the present work provides a critical extension of the affective priming effect: Unlike in earlier studies (e.g., Meier & Robinson, 2004), the mapping of affect onto verticality did not involve any spatial processing. Rather, it involved the judgment of tonal quality. The similarity of affective priming effects across visual and auditory modalities is far from trivial, since the relative “height” of a tone is qualitatively different from the relative height of a visual stimulus.
The affective priming effects of Experiments 1 and 2 were predicted from the affective metaphor hypothesis (Meier & Robinson, 2005). Affective metaphor provides a “deep” basis for mappings between affect and verticality, and it can account for the consistent linkage of vertical metaphor to perceptual experiences, even across cultures (Kövecses, 2000; Lakoff & Johnson, 1999; Schwartz, 1981). Notably, cross-modal affective priming was observed in the present study for tones that did not themselves carry an affective valence: The tones presented in Experiments 1 and 2 were devoid of any subjective nuance suggestive of valence, as indicated by the fact that pilot participants did not rate HP tones as more pleasant than LP tones. Instead, the results highlight the capacity of affective evaluations to bias subsequent perceptual judgments in a manner consistent with affective metaphor (Meier & Robinson, 2004, 2005; Meier, Robinson, & Clore, 2004). This asymmetry is consistent with the notion that affect borrows from the perceptual domain rather than the other way around, a central tenet of the metaphor representation perspective (Gibbs, 1994; Lakoff & Johnson, 1999; Meier & Robinson, 2005).
However, the results of the present experiments are also consistent with a theoretical conception according to which affective priming emerges from culturally shared associations that map both affect and tones onto a vertical spatial dimension (see Grady, 2005). Language assigns spatial attributes to pitch, and the tones in Experiments 1 and 2 could have been labeled high or low, even though we encouraged participants to use neutral labels by asking them to distinguish between Tones A and B. Dissociating priming that is due to affective metaphor from priming that is due to culturally shared associations is likely to be difficult, and it is beyond the scope of the present investigation.
The Locus of Cross-Modal Priming
Affective priming could facilitate the perception of acoustic signals by using affect to allocate attention, or it could influence response selection by using affect to prepare a particular response. It could also do both—facilitate acoustic perception and bias response selection. The two experiments used signal detection methods to determine the locus of affective priming. The results indicate that affective priming influenced response selection, but did not influence perception independent of such response biases.
Other research suggests that perceptual processes are relatively insulated from top-down influences related to meaning (Pashler, Johnston, & Ruthruff, 2001), which may explain why affective evaluations are more likely to prime responses rather than perceptions (Klauer & Musch, 2003; Storbeck, Robinson, & McCourt, 2006). As such, we agree with Grady (2005) that people’s response biases are very much indicative of the metaphorical nature of perceptual responses.
Although the present effects relate to response biases, we consider it very unlikely that demand characteristics were responsible. Affective primes and tone targets were randomly assigned to particular trials, meaning that there could be no strategic advantage to predicting pitch on the basis of the affective prime. Also, the response–stimulus interval between affective evaluations and tone targets was very short (150 msec) and consistent with prior studies of automatic priming effects (e.g., McRae & Boisvert, 1998). Finally, such associations appear to be operative very early in life (Phillips et al., 1990). We therefore suggest that affective primes biased tone judgments in a manner that was nonstrategic and likely nonconscious in nature.
Conclusions
The present experiments drew from affective metaphor theory (see, e.g., Meier & Robinson, 2005) in predicting that affective evaluations would bias subsequent tone categorization. Specifically, we predicted that positive (vs. negative) evaluations would bias individuals to believe that they were hearing HP (vs. LP) tones. Two experiments confirmed this hypothesis, and did so in relation to multiple dependent measures. Whether these effects are viewed in terms of affective metaphor or affective associations, the robust nature of these results should be regarded as novel in relation to affective priming effects on perceptual judgments.
Acknowledgments
We thank Arthur Glenberg, Richard Pastore, Matthew Solomon, and two anonymous reviewers for their helpful comments on an earlier version of this article.
Contributor Information
Ulrich W. Weger, State University of New York, Binghamton, New York
Brian P. Meier, Gettysburg College, Gettysburg, Pennsylvania
Michael D. Robinson, North Dakota State University, Fargo, North Dakota
Albrecht W. Inhoff, State University of New York, Binghamton, New York
References
- Adachi M, Trehub SE. Children’s expression of emotion in song. Psychology of Music. 1998;26:133–153.
- Barsalou LW. Perceptual symbol systems. Behavioral & Brain Sciences. 1999;22:577–660. doi:10.1017/s0140525x99002149
- Bradley MM, Lang PJ. Affective norms for English words (ANEW). Gainesville: University of Florida, NIMH Center for the Study of Emotion and Attention; 1999.
- Gabrielsson A, Lindström E. The influence of musical structure on emotional expression. In: Juslin PN, Sloboda JA, editors. Music and emotion: Theory and research. Oxford: Oxford University Press; 2001. pp. 223–248.
- Gibbs RW Jr. The poetics of mind: Figurative thought, language, and understanding. Cambridge: Cambridge University Press; 1994.
- Glenberg AM. What memory is for. Behavioral & Brain Sciences. 1997;20:1–55. doi:10.1017/s0140525x97000010
- Grady J. Primary metaphors as inputs to conceptual integration. Journal of Pragmatics. 2005;37:1595–1614.
- Klauer KC, Musch J. Affective priming: Findings and theories. In: Musch J, Klauer KC, editors. The psychology of evaluation: Affective processes in cognition and emotion. Mahwah, NJ: Erlbaum; 2003. pp. 7–49.
- Kövecses Z. Metaphor and emotion: Language, culture, and body in human feeling. Cambridge: Cambridge University Press; 2000.
- Lakoff G, Johnson M. Philosophy in the flesh: The embodied mind and its challenges to Western thought. New York: Basic Books; 1999.
- McRae K, Boisvert S. Automatic semantic similarity priming. Journal of Experimental Psychology: Learning, Memory, & Cognition. 1998;24:558–572.
- Meier BP, Robinson MD. Why the sunny side is up: Associations between affect and vertical position. Psychological Science. 2004;15:243–247. doi:10.1111/j.0956-7976.2004.00659.x
- Meier BP, Robinson MD. The metaphorical representation of affect. Metaphor & Symbol. 2005;20:239–257.
- Meier BP, Robinson MD, Clore GL. Why good guys wear white: Automatic inferences about stimulus valence based on brightness. Psychological Science. 2004;15:82–87. doi:10.1111/j.0963-7214.2004.01502002.x
- Melara RD, O’Brien TP. Interaction between synesthetically corresponding dimensions. Journal of Experimental Psychology: General. 1987;116:323–336.
- Pashler H, Johnston JC, Ruthruff E. Attention and performance. Annual Review of Psychology. 2001;52:629–651. doi:10.1146/annurev.psych.52.1.629
- Phillips RD, Wagner SH, Fells CA, Lynch M. Do infants recognize emotion in facial expressions? Categorical and “metaphorical” evidence. Infant Behavior & Development. 1990;13:71–84.
- Schwartz B. Vertical classification: A study in structuralism and the sociology of knowledge. Chicago: University of Chicago Press; 1981.
- Stepper S, Strack F. Proprioceptive determinants of emotional and nonemotional feelings. Journal of Personality & Social Psychology. 1993;64:211–220.
- Storbeck J, Robinson MD, McCourt ME. Semantic processing precedes affect retrieval: The neurological case for cognitive primacy in visual processing. Review of General Psychology. 2006;10:41–55.
- Wagner S, Winner E, Cicchetti D, Gardner H. “Metaphorical” mapping in human infants. Child Development. 1981;52:728–731.
- Wapner S, Werner H, Krus DM. The effect of success and failure on space localization. Journal of Personality. 1957;25:752–756. doi:10.1111/j.1467-6494.1957.tb01563.x