Abstract
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to the prevalence of disorders of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients’ self-consciousness, which may persist even when all higher-level cognitions are lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward system and activity in the motor system, respectively, thus serving as a starting point for rehabilitation.
Keywords: consciousness, DoC, music, rehabilitation, psychology, neurophysiology
Introduction
The aim of the present paper is to show that music is a particular kind of auditory stimulation that may be most beneficial for use in patients with disorders of consciousness (DoC) in both research and therapy. With respect to therapy, the enormous complexity of such studies partly accounts for the currently low number of well-controlled trials and hence the limited demonstration of evidence-based effects of music therapy in DoC (see Giacino et al., 2012). However, one-time experimental interventions using musical stimuli yielded promising results in a few studies with medium-sized DoC samples (e.g., Formisano et al., 2001; O’Kelly and Magee, 2013; Magee and O’Kelly, 2015). Less clear-cut are the data on music therapy interventions, which are summarized in Table 1. As can be seen in the table, only three studies (Formisano et al., 2001; Raglio et al., 2014; Sun and Chen, 2015) tested the effects of music therapy in 10 or more DoC patients. Only the last one employed a sufficient level of control and showed some promising results. However, these data are in need of replication.
TABLE 1.
Source | Participants | Design | Outcome |
---|---|---|---|
Formisano et al. (2001) | Thirty-four MCS patients, 13–70 years, M = 35.94; 18 TBI, 16 non-TBI | Music therapy program included singing or playing different musical instruments. Three 20–40 min sessions per week for 2 months. | Decrease in inertia or psychomotor agitation in 21 patients. No significant change in CRS scores. |
Magee (2005) | One VS patient, >50 years old, anoxic brain injury | Music therapy program with singing and playing musical pieces. Music selection based on the participant’s life history. No information about the duration of the program. | The patient demonstrated some behavioral responses to music and song presentation. No information about changes in objective measures. |
Raglio et al. (2014) | Four MCS and six VS patients (five with anoxic brain injury, four hemorrhage, one TBI) | Music therapy included two cycles of 15 sessions (three sessions/week, 30 min each). The two cycles were separated by 2 weeks. | Improvements in some observed behaviors in MCS patients: eye contact, smiles, communicative use of instruments/voice, reduction of annoyance and expressions of suffering. VS patients showed only increased eye contact. |
Seibert et al. (2000) | One MCS patient, 20 years old, after severe hypothermia, cardiac arrest, and brain anoxia; GCS score – 12; Rancho Los Amigos Scale – 4 | Music therapy program involved exposure to oboe music, physical contact with the instrument, and the presentation of favorite music over 2.5 years. | At the end of the program: GCS score – 15, Rancho Los Amigos Scale – 6; persisting moderate deficits in orientation/attention, visual-spatial skills, memory, and language. Reading comprehension and ability to follow commands were at a moderate level. |
Lee et al. (2011) | One VS patient, age 45 years; intracerebral hemorrhage; GCS score – 4 | ECG data collected over 7 weeks. First week: six baseline sessions with no music, each lasting 180 min. Next 6 weeks: six music sessions in which the patient listened to Mahler’s Symphony No. 2, each session lasting 210 min. | Changes in the standard deviation of the time series indicated positive changes in the cardiovascular system. |
Steinhoff et al. (2015) | Four VS patients after cardiopulmonary resuscitation | Music therapy group (n = 2): standard care plus live and individual music therapy sessions for 5 weeks (three sessions/week, about 27 min each). Control group (n = 2): standard care only. Baseline PET at rest; PET at the end of the second and sixth weeks in response to musical stimulation (in both the music and control groups). | Patients in the music therapy group appeared to show higher brain activity than control group patients in the last PET scan. |
Sun and Chen (2015) | Forty TBI coma patients, 18–55 years old; GCS score between 3 and 8; 6.55 ± 2.82 days after injury | Music therapy group (n = 20): listening to their favorite and familiar music for 15–30 min three times a day for 4 weeks. Control group (n = 20): waiting-list control. | GCS scores increased significantly in both groups, yet significantly more in the music therapy group. Relative power of slow EEG rhythms decreased in both groups, yet these changes were significantly stronger in the music therapy group. |
CRS, Coma Recovery Scale; ECG, electrocardiography; EEG, electroencephalography; GCS, Glasgow Coma Scale; MCS, minimally conscious state; PET, positron emission tomography; TBI, traumatic brain injury; VS, vegetative state.
In contrast to therapeutic effects in DoC, we can draw on a large number of studies that examined the highly specific effects of music on basic perceptual, higher-cognitive, and emotional processes in the brains of healthy subjects, and derive suggestions for their use in DoC. In this review, we will concentrate on features of music that play, or can play, a significant role in the examination and/or rehabilitation of chronic DoC. We do not present a comprehensive review of music perception and cognition but rather intend to analyze the potential and applicability of music stimulation in DoC.
This review starts with some fundamental reasons why auditory stimulation might be particularly useful in DoC. We then provide essential information about the neural specializations of auditory processing (e.g., basic sensory and sensorimotor mechanisms) before describing the higher-level perceptual organization of sound, including the neural differences associated with the processing of musical syntax and semantics. After moving on to discuss the potential benefits of multisensory stimulation in DoC, we finally provide evidence and suggestions for the use of musical stimulation as a therapeutic tool with respect to effects on cognition, emotion, and stress in DoC. The scheme we adopted throughout all sections is to first describe how healthy subjects respond to music before reviewing the evidence-based practice or potential application of music stimulation in chronic DoC.
Why Auditory Stimulation in DoC?
Many DoC patients cannot see. Andrews et al. (1996) indicated in their frequently cited article that blindness is a major issue contributing to the exceptionally high rate of misdiagnosis in DoC: “The very high prevalence of severe visual impairment… is an additional complicating factor since clinicians making the diagnosis of the vegetative state place great emphasis on the inability of the patient to visually track or blink to threat” (p. 15). Moreover, even if both sensory pathways from the retina to the visual cortex and the cortical centers themselves are intact, this does not indicate that a DoC patient can see, as the role of motor control in visual perception is vital. To perceive anything more than just light, not only must the eyelids be open but also the ocular muscles and their controlling brain areas must be able to perform following and searching saccadic movements, a skill that is drastically reduced in the vegetative state (VS) and also severely impaired in the minimally conscious state (MCS). Conversely, the ability to consistently perform following gaze movements is considered a criterion to rule out a DoC diagnosis, whereas inconsistent following may be compatible with a diagnosis of the minimally conscious state (MCS+; Bruno et al., 2011). In an unpublished pilot study, we examined electroencephalography (EEG) responses to visual stimuli as simple as checkerboard patterns in five patients who fulfilled the diagnostic criteria of MCS+ according to Bruno et al. (2011). We failed to record a consistent evoked potential (EP) in any of them, although EPs to simple flashes as well as the primary EP complex (P1–N1–P2) to auditory stimuli were virtually normal.
The situation seems indeed to be completely different in the auditory modality, not only because ears cannot be physically closed like eyes but also because active voluntary control of peripheral muscles is not vital for immediate sound sensation, although motor and corresponding somatosensory factors are of great importance in the perception of complex auditory stimulation (see below). We could not find data on the prevalence of absent brain stem auditory EPs (BSAEP) in DoC, perhaps because the presence of this response is an inclusion criterion in most studies and patients without BSAEP are therefore excluded from the very beginning. It follows that studies in DoC should not only provide detailed exclusion criteria with respect to auditory EPs but also report how many patients were effectively excluded from the sample based on these rules. In fact, auditory EPs are frequently used in ENT clinics to distinguish between normal and hearing-impaired states in otherwise healthy infants (Paulraj et al., 2015). In our own data, among 83 VS patients with at least partially preserved BSAEP, 71 patients (i.e., 86%) also showed cortical EP components (as a rule, N1). If we introduce a further criterion and eliminate 10 VS patients with large-amplitude diffuse delta waves dominating the EEG all the time, only two patients (2.7%) with BSAEP would not show cortical EPs. All 49 examined MCS patients exhibited cortical auditory EPs. A subsample of this patient group (i.e., 50 VS and 39 MCS patients) has been reported in detail elsewhere (Kotchoubey, 2005; Kotchoubey et al., 2005). Notably, we observed a highly significant N1 component to complex tonal stimuli and even highly differentiated responses to speech (Kotchoubey et al., 2014) in patients with anoxic brain injury who had been in the VS for up to 11 years, with Level 4 brain atrophy according to the classification of Galton et al. (2001) and Bekinschtein et al. (2009). Moreover, about half of the DoC patients without a specific lesion of the right temporal lobe exhibited significant responses to affective prosody (exclamations like “wow,” “ooh,” etc.: Kotchoubey et al., 2009). Taken together, these data suggest that deafness is not a major problem in most DoC patients. If deafness is present, however, it is usually detected at very early stages of the disease because BSAEP are routinely recorded from the very beginning in most German hospitals for neurological rehabilitation. Cases of cortical deafness in DoC seem to be rare. If, as suggested in a stepwise procedure (Kübler and Kotchoubey, 2007; Kotchoubey et al., 2013), we first exclude patients without brain stem EPs and patients with diffuse delta activity (the two groups usually overlap strongly), cortical auditory EPs can be obtained in nearly every DoC patient. Therefore, we suggest using complex tonal stimuli for auditory EPs as a rule and restricting music therapy to DoC patients with preserved neurophysiological findings [e.g., brain-stem and middle-latency auditory EPs and event-related potentials (ERPs)].
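To make the suggested stepwise procedure explicit, the following minimal sketch encodes the screening logic in code form; the data structure and field names are hypothetical placeholders for illustration, not a validated clinical algorithm.

```python
# A minimal sketch (not a validated clinical algorithm) of the stepwise
# screening logic suggested above (Kübler and Kotchoubey, 2007; Kotchoubey
# et al., 2013). Field names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AuditoryScreening:
    bsaep_present: bool        # brain stem auditory EPs recordable
    diffuse_delta_eeg: bool    # EEG dominated by large-amplitude diffuse delta
    cortical_ep_present: bool  # cortical auditory EPs (e.g., N1) to complex tones

def eligible_for_music_stimulation(s: AuditoryScreening) -> bool:
    """Step 1: require BSAEP; Step 2: exclude diffuse delta EEG;
    Step 3: require cortical auditory EPs to complex tonal stimuli."""
    if not s.bsaep_present:
        return False
    if s.diffuse_delta_eeg:
        return False
    return s.cortical_ep_present

# Example: preserved BSAEP, no dominating delta, cortical N1 present -> included
print(eligible_for_music_stimulation(AuditoryScreening(True, False, True)))  # True
```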
Neural Specializations for Auditory Processing
Basic Considerations
The oscillatory structure of acoustic events can be conceptualized as two perceptually quite distinct components: one that consists of higher frequencies, which provide the basis of pitch and timbre perception, and one that consists of lower frequencies, which provide the basis of musical rhythm and meter perception (i.e., the temporal organization of sounds). According to a well-justified (although not yet in all respects empirically tested) hypothesis, this distinction corresponds to two discrete anatomical and physiological components of the auditory system that have been classically described in the neurophysiology of afferent systems as specific versus non-specific, or lemniscal versus extralemniscal, subsystems (e.g., Abrams et al., 2011).
Anatomically, the auditory cortex is subdivided into the primary cortex, or A1 [Brodmann area (BA) 41], the belt, or A2 (BA 42), and the parabelt, or A3 (BA 22). The belt extends from inside the lateral sulcus or the supratemporal plane out onto the open surface of the superior temporal gyrus (STG) and receives independent input from the superior colliculus separately from A1 (Pandya, 1995). Neurons in the ventral part of the medial geniculate body (MGB) terminate in deeper layers (mainly, Layer 4 and the deep portion of Layer 3) of A1, and their input immediately elicits action potentials in the pyramidal neurons located there. The narrow frequency tuning of these neurons results in a relatively precise tonotopic organization of A1 (Formisano et al., 2003), providing specific frequency information and thus contributing to the perception of pitch and timbre (the “content” of a melody). In contrast, neurons located in various parts of the MGB (mostly in its dorsal division) that target superficial Layers 1 and 2 of A1 and the belt are more broadly tuned and deliver non-specific information. By activating apical dendrites of the pyramidal cells, they do not directly trigger firing but rather regulate the firing threshold by “warming up” pyramidal neurons according to the basic rhythm (or the metrical “context”) of a musical phrase. The high-frequency content is therefore synchronized with the low-frequency context in such a way that responses “driven” by events associated with contextual accents are amplified, while responses that occur off the beat are weakened. The context is therefore created by a modulatory input, and the content by a “driving” input of the auditory cortex (Musacchia et al., 2014).
As regards pitch perception, Rauschecker (1997, 1999) and Rauschecker et al. (1997) were probably the first to demonstrate, in macaque monkey experiments, the independence of the processing of pure tones and chords. Since the primary auditory cortex (BA 41) and the belt (BA 42) receive largely independent input, the tonotopic structure that is typical for the superior colliculus and A1 is basically lost in the belt and even more so in the parabelt. Pure tones are therefore the least effective auditory stimuli for eliciting neuronal responses in these areas (Rauschecker, 1997), which may have implications for their use in DoC. In contrast, the cells of the belt respond strongly to complex sounds and frequency-modulated sweeps, indicating the non-reductive processing of complex sounds that builds the basis for the perception of pitch modulation independently of intensity (Rauschecker, 1999). The same research team further hypothesized that the auditory system, like its visual counterpart, entails two different pathways to higher-order cortical areas, designed for processing spatial and temporal information, the “where” and “when” subsystems (Romanski et al., 1999). This hypothesis, however, remains under debate (e.g., Griffiths, 2001). Instead, another model proposed that auditory pathways could be segregated by their modes of auditory processing, such that a dorsal pathway extracts the message or melody from sound, whereas the ventral pathway identifies the speaker or instrument by its timbre (Zatorre et al., 2002b).
The independence of single frequency and harmonic processing is also critically important for the separation of auditory objects (e.g., Yost, 2007), because objects can be conceived as particular correlations of several frequency bands (Nelken et al., 2014). Moreover, the non-linear analysis of physical stimuli in the cochlea can result in internally generated new harmonics produced by the auditory system itself (Pickles, 1988). These facts demonstrate the inadequacy of the idea that the primary auditory cortex processes sound in a Fourier-like manner.
Notably, the relation between the three auditory cortex regions (i.e., A1, A2, and A3) changed considerably in the course of human evolution. While the primary auditory cortex in humans is slightly smaller than in macaques, the human belt and parabelt areas are almost 10 times larger (Angulo-Perkins and Concha, 2014). Another interesting fact is that the origin of auditory cortical input is mostly top-down. This is true even for A1, as only 23% of neurons projecting to A1 are of purely acoustic subcortical (i.e., thalamic) origin, while 66% are cortical neurons, most of them located at higher levels of the auditory system. Therefore, one cannot speak of pure feature analysis at the A1 level. Rather, stimulus representation in the auditory cortex is task-specific, i.e., “spatio-temporal activation patterns of neuronal ensembles in AC, passively generated by a given stimulus and basically reflecting all features of a stimulus, can be modified according to the context and the procedural and cognitive demands of a listening task, i.e., also reflect semantic aspects of a stimulus” (Scheich et al., 2007, p. 214).
As receptive fields of cortical neurons can flexibly adjust to the auditory task, the tonotopy of A1 should not be overvalued. Many A1 neurons in most investigated mammalian species respond to several frequencies (for primates, see, e.g., Sadagopan and Wang, 2009), and even those with a single-frequency peak do not respond to individual components of harmonic tones that lie outside of their tone-derived frequency response area (Wang and Walker, 2012). This suggests that frequency-driven responses can be harmonically modulated. While the relatively few axons from the geniculate nucleus of the thalamus frequently end at cell bodies or basal dendrites, the bulk of the top-down cortical input arrives at apical dendrites, thus creating a “context” that modulates responsivity to specific inputs. The relation between top-down and bottom-up input in higher-order areas is shifted even further toward the former. Together, these data support the view that the purpose of the auditory cortex in higher animals (mainly investigated in monkeys) is not only sensory analysis but also the adjustment to the auditory environment and the identification of auditory objects (Yost, 2007; Reybrouck and Brattico, 2015).
Human Studies
As cellular mechanisms of music perception at subcortical and cortical levels cannot be studied directly in humans, the neural characteristics of music processing have mostly been investigated using event-related brain responses measured with the EEG and the magnetoencephalogram (MEG), or by assessing the blood oxygenation (BOLD) response to auditory stimulation with functional magnetic resonance imaging (fMRI). The latter, for example, revealed that optimized auditory processing of rhythm and frequency is associated with a relative hemispheric advantage, with the left auditory cortex being more sensitive to temporal characteristics of auditory cues (which are more prevalent in speech) and the right auditory cortex being better at decoding the pitch and harmony content of acoustic stimuli, which is emphasized in music (Zatorre et al., 2002a). Given the huge difference in methodological precision (each EEG, MEG, or fMRI recording encompasses the activity of many thousands of neurons, compared with single-cell recordings in animals), however, one may even be surprised at how similar the conclusions of human and animal experiments are.
The arrival of auditory input at the cortex in humans is manifested in ERPs by the obligatory (exogenous) primary complex P1–N1, with latencies of about 50 ms and 100–120 ms for P1 and N1, respectively. Processing of stimulus deviation is reflected in an endogenous ERP component, the mismatch negativity (MMN; Näätänen, 1995), which attains its peak around 200 ms post stimulus. MEG data show that at least a large portion of the MMN is generated in the auditory cortex. An important property of the MMN is that its generators do not require active attention. Even though attention to stimuli can increase MMN amplitude (e.g., Erlbeck et al., 2014), other ERP components (which can mask the MMN) are increased to a much larger extent; in practice, it is therefore better to record the MMN in a condition in which the subject’s attention is caught by some other activity such as reading a book or watching a movie. Higher-order music processing can be manifested in an early right anterior negativity (ERAN), an ERP component of frontal origin (for review, see Koelsch, 2014), or in two late components, N400 and P600, with latencies of about 400 and 600 ms, respectively. These components, however, are much more attention-dependent than the MMN.
For a long time, the MMN was studied in response to rather simple stimulus deviations such as deviations in pitch (e.g., 800 Hz–800 Hz–800 Hz–800 Hz–600 Hz), intensity (e.g., 80 dB–80 dB–80 dB–80 dB–65 dB), or tone duration (e.g., 50 ms–50 ms–50 ms–50 ms–30 ms). Later studies showed, however, that the MMN also responds to much more complex pattern changes in the auditory stream (e.g., Tervaniemi et al., 1994). Thus, the repetition of a short sequence like AAB results in an MMN after omission of the last tone (AA_), reversal (ABA), or even repetition of the same tone (AAA). Moreover, MMN mechanisms are also sensitive to some level of abstraction. This is shown in an experiment in which standard (repeated) stimuli were ascending pairs combining five different chords (AB, CD, AC, BE, etc.). Two kinds of rare deviants were either descending pairs (DA, CB, etc.) or repetitions (AA, DD, etc.). Both kinds of deviants elicited a strong MMN (Tervaniemi et al., 2001).
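To illustrate the structure of such an oddball block concretely, the minimal sketch below synthesizes a pitch-deviant sequence as a raw audio array; tone duration, deviant probability, and inter-stimulus interval are illustrative choices, not parameters taken from the cited studies.

```python
# Minimal sketch of a pitch-deviant oddball block (standard 800 Hz, deviant
# 600 Hz). All parameters are illustrative and not taken from a specific study.
import numpy as np

fs = 44100                  # audio sampling rate (Hz)
tone_dur = 0.05             # 50-ms tones
isi = 0.45                  # silent gap between tones (s)
n_stimuli = 200
p_deviant = 0.15            # proportion of deviants

rng = np.random.default_rng(0)
t = np.arange(int(fs * tone_dur)) / fs

def tone(freq):
    ramp = int(0.005 * fs)  # 5-ms onset/offset ramps to avoid clicks
    env = np.ones_like(t)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return 0.5 * env * np.sin(2 * np.pi * freq * t)

sequence, labels = [], []
for _ in range(n_stimuli):
    is_deviant = rng.random() < p_deviant
    labels.append("deviant" if is_deviant else "standard")
    sequence.append(tone(600.0 if is_deviant else 800.0))
    sequence.append(np.zeros(int(fs * isi)))

audio = np.concatenate(sequence)
print(f"{labels.count('deviant')} deviants among {n_stimuli} stimuli, "
      f"{audio.size / fs:.1f} s of audio")
```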
Dipole localization using MEG indicates that the generator of the MMN to chords in the STG is located more medially than the MMN generator for sine tones. However, stimulus complexity is not the only factor affecting the generator structures, as demonstrated by experiments in which the magnetic counterpart of the electric MMN was compared between a phoneme change and a chord change of the same acoustic complexity. The source of the “musical” MMN was located superior to the source of the “phonetic” MMN. Moreover, the former was lateralized to the right side, while the latter was symmetrical. Importantly, the generator of the component P1 was identical for all stimuli of comparable complexity regardless of their origin. Apparently, the MMN mechanism is the first processing stage at which music-specific analysis of auditory stimuli begins (Angulo-Perkins and Concha, 2014).
In support of the animal data presented above, which indicate a strong independence of the processing of harmonic tones from that of single sine frequencies, MMN data show that in humans, too, pitch deviations of chords result in a larger MMN than comparable deviations of pure tones (Tervaniemi et al., 2000). By successfully replicating this MMN paradigm in a large sample of DoC patients, our group demonstrated that the MMN to harmonic tones not only had a larger amplitude, as shown before, but also a higher frequency of occurrence than the MMN to sine tones (Kotchoubey et al., 2003). About half of the patients who did not have an MMN to simple sine tones did, however, exhibit an MMN to harmonic tones. The MMN seems to be present in about 30–60% of all patients with acute or chronic DoC (Kotchoubey, 2015). In acute coma it belongs to the most reliable predictors of further awakening (meta-analytic review of Daltrozzo et al., 2007), and there is also evidence of its predictive meaning in chronic DoC (Kotchoubey et al., 2005). To evaluate the effectiveness of music therapy in chronic DoC, the routine assessment of the MMN to complex tones could thus help develop a potential outcome predictor.
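Such an MMN comparison between stimulus conditions can be quantified from already epoched EEG data along the lines of the minimal sketch below; the epoch window, the 100–250 ms scoring window, and the simulated arrays are illustrative assumptions rather than the exact procedure of the cited studies.

```python
# Minimal sketch: MMN quantified as the deviant-minus-standard difference wave
# at one fronto-central channel, scored as mean amplitude in an illustrative
# 100-250 ms window. Epochs are assumed baseline-corrected, in microvolts.
import numpy as np

fs = 500
t = np.arange(-0.1, 0.4, 1 / fs)            # epoch from -100 to 400 ms

def mmn_amplitude(std_epochs, dev_epochs, window=(0.1, 0.25)):
    diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)   # difference wave
    mask = (t >= window[0]) & (t <= window[1])
    return diff[mask].mean()                                   # mean amplitude

# Simulated data: deviants carry a small negative deflection around 180 ms.
rng = np.random.default_rng(1)
std = rng.normal(0, 5, (400, t.size))
dev = rng.normal(0, 5, (60, t.size)) - 3 * np.exp(-((t - 0.18) / 0.05) ** 2)

print(f"Simulated MMN amplitude: {mmn_amplitude(std, dev):.2f} microvolts")
```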
Other ERP components, later than the MMN, occur less frequently in DoC, but confirm that the auditory system of many DoC patients remains flexible enough to process stimuli of very high complexity (Kotchoubey, 2015). Thus, the attention-dependent P3 component in these patients responds, like the MMN, much better to harmonic stimuli than to sine tones (Kotchoubey et al., 2001). ERP responses to complex violations in rhythmic sound sequences have recently been demonstrated in 10 of 24 patients in deep post-anoxic coma who were additionally sedated (Tzovara et al., 2015).
Key messages:
• Auditory processing is related to one of the most basic processes underlying all higher forms of life, i.e., the processing of environmental events in their temporal sequence.
• The auditory cortex entails specialized regions for the processing of complex sounds and their components. Auditory scene analysis and the identification of auditory objects are important tasks of the auditory cortex, which can result in clinically important dissociations between the processing of simpler versus more complex sounds.
• Consistent responses to chords and to changes in harmonic patterns have also been observed in DoC cases where cortical responses to sine tones could not be recorded. We therefore suggest using complex sounds for auditory stimulation in DoC as a rule.
• Non-responsiveness to simple sounds is no reason to withhold musical therapy!
Higher-Level Auditory Processing
Segregation and Integration
Beyond basic aspects of sound processing, music perception represents a highly complex process that involves the segregation and integration of various acoustic elements such as melody, harmony, pitch, rhythm, and timbre, which engage networks that are implicated not only in auditory but also in syntactic and visual processing (Schmithorst, 2005). In fact, both music and language engage partly overlapping (Liegeois-Chauvel et al., 1998; Buchsbaum et al., 2001; Koelsch and Siebel, 2005; Koelsch, 2006; Chang et al., 2010; Schön et al., 2010; Patel, 2011) as well as domain-specific subcortical and cortical structures (Belin et al., 2000; Tervaniemi et al., 2001; Zatorre et al., 2002a; Zatorre and Gandour, 2008).
Sound perception first requires the extraction of auditory features in the brain stem, the thalamus, and the auditory cortex (Koelsch and Siebel, 2005), leading to auditory percepts of pitch-height and pitch-chroma, rhythm, and intensity. However, the lower-level frequencies related to the temporal organization of music may also be processed independently of melodic intervals (Peretz and Zatorre, 2005), additionally engaging premotor and supplementary motor areas, the basal ganglia, and the cerebellum (Grahn and Brett, 2007; Thaut et al., 2009). The integration of sequentially ordered acoustic elements on longer time-scales is a highly demanding task that requires the structuring (e.g., separation or grouping) of musical elements, leading to a cognitive representation of acoustic objects based on Gestalt principles (Darwin, 2008; Ono et al., 2015). The cognitive involvement of musical pattern processing is evident from the joint activation of auditory association cortices with prefrontal regions in the brain (Griffiths, 2001).
All basic forms of learning, some of which are present even in the simplest animals such as worms, necessarily involve the ability to perceive events in their temporal order. Thus, habituation results from perceiving one and the same stimulus as repeating; classical (Pavlovian) conditioning is based on the perception that one stimulus (CS) consistently precedes another one (UCS); and so on. The perception of sequential events is essential to all higher forms of life, because it allows for the timely preparation of appropriate responses. The steady anticipation of consecutively presented information units therefore relates music to one of the most fundamental necessities of life, the predictability of events in their temporal succession (e.g., Francois and Schön, 2014; Wang, 2015). Events that are out of rhythm are unpredictable.
The sequential ordering of individual pitches also leads to the perception of melody, whereas their vertical ordering leads to the perception of harmony. To achieve perceptual coherence, a rule-based hierarchical organization of acoustic inputs is therefore fundamental to determining how tones may be combined to form chords, how chords may be combined to form harmonic progressions, and how they are all united within a metric framework. This process of hierarchical structuring and temporal ordering of acoustic objects is indeed a shared feature in the syntactic organization of both music and speech.
Musical Syntax and Semantics
Syntax in music, just as in language, “refers to the principles governing the combination of discrete structural elements into sequences” (Patel, 2008, p. 241), with independent (yet interrelated) principles for melody, harmony, and rhythm. Musical syntax has been most thoroughly investigated with respect to harmony (e.g., Koelsch, 2012), as the syntactic perception of harmonic dissonance and consonance depends crucially on the functional relationships of preceding and subsequent chords (or tones). As outlined above, these percepts build on expectancies based on previously acquired long-term knowledge and thus trigger distinct responses in the brain when they are violated.
An early study with musicians by Janata (1995) demonstrated that the violation of expectancy in the final chord of a chord sequence elicits larger P3 peaks as a function of the degree of violation, thus reflecting both attentional (P3a; 310 ms latency) and decisional (P3b; 450 ms latency) processes. Another study (Patel et al., 1998) reported that syntactic incongruities in both language and music elicit a parieto-temporal P600, which had been associated with language processing, suggesting that this ERP component reflects more general processes of structural acoustic integration across domains. Likewise, some kinds of syntactic violations in language may elicit a specific negative ERP component with a latency of about 200–300 ms and a maximum over the left frontal cortex, the so-called early left anterior negativity (ELAN). Beginning with a first study by Koelsch et al. (2000), a comparable syntactic violation in music was found to result in a quite similar ERP component over the right frontal cortex: the ERAN (Koelsch et al., 2001; Koelsch and Jentschke, 2010; Koelsch, 2012). Accordingly, the ERAN reflects “a disruption of musical structure building, the violation of a local prediction based on musical expectancy formation, and acoustic deviance” (Koelsch, 2012, p. 111). A later negative component around 500–550 ms (N5) was also observed over frontal regions following the ERAN, but was rather associated with musical meaning (Poulin-Charronnat et al., 2006, see below). Other, simpler kinds of syntactic violations resulted mainly in a late positive parietal complex rather than an early frontal negativity for both language (e.g., Osterhout, 1995) and music (e.g., Besson and Faïta, 1995), although studies on melodic syntactic violations also reported a frontal ERP response with a slope emerging around 100 ms and peaking around 120–180 ms that resembled the ERAN in harmonic violation paradigms (Brattico et al., 2006; Koelsch and Jentschke, 2010).
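To keep these latency windows and scalp regions in view when analyzing data, the sketch below scores mean amplitudes in illustrative component windows over frontal and parietal channel groups; the windows, channel labels, and simulated evoked response are placeholders rather than values prescribed by the cited studies.

```python
# Minimal sketch: scoring illustrative latency windows for syntax/meaning-related
# ERP components. Windows, channel groups, and data are assumed placeholders.
import numpy as np

fs = 500
t = np.arange(-0.1, 0.8, 1 / fs)                    # epoch -100 to 800 ms
channels = ["F3", "Fz", "F4", "P3", "Pz", "P4"]

# Illustrative component definitions (latency window in s, channel group)
components = {
    "ERAN": ((0.15, 0.25), ["F3", "Fz", "F4"]),     # early right anterior negativity
    "N400": ((0.35, 0.45), ["P3", "Pz", "P4"]),     # meaning-related negativity
    "P600": ((0.55, 0.70), ["P3", "Pz", "P4"]),     # late positive complex
}

rng = np.random.default_rng(3)
evoked = rng.normal(0, 2, (len(channels), t.size))  # simulated evoked response (uV)

for name, (window, chans) in components.items():
    ch_idx = [channels.index(c) for c in chans]
    mask = (t >= window[0]) & (t <= window[1])
    amp = evoked[np.ix_(ch_idx, mask)].mean()
    print(f"{name}: mean amplitude {amp:+.2f} uV in "
          f"{window[0] * 1000:.0f}-{window[1] * 1000:.0f} ms")
```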
A conceptual similarity between music and speech perception is also reflected in the dynamics of the N400 ERP component (e.g., Patel, 2003; Kotchoubey, 2006; Daltrozzo and Schön, 2009a,b). Like the N5, the N400 has been attributed to musical meaning rather than syntax, contributing to the subjective interpretation of musical information, which involves affective processing. Koelsch (2012) used the term musical semantics to account for the different dimensions of extra-musical, intra-musical, and musicogenic meaning. Extra-musical meaning can be derived from musical sign qualities by making reference to the extra-musical world, such as the imitation of naturally occurring sounds (e.g., the river Rhine in Wagner’s “Rheingold” prelude), the psychological state of a protagonist (e.g., in the pranks of Richard Strauss’s “Till Eulenspiegel”), or arbitrary symbolic associations (e.g., national anthems). Intra-musical meaning in turn refers to the interpretation of structural relations between musical elements, whereas musicogenic meaning describes the experience of emotional, physical, or personal effects of music, which are evoked within the listener.
Several studies have demonstrated that the representation of extra-musical meaning can be related to the N400, which is thought to reflect the processing of meaning, for example when the content of target words in a semantic priming paradigm is meaningfully unrelated to the content of preceding musical excerpts (Koelsch et al., 2004). The N400 seems to be generated in the posterior temporal lobe, in close vicinity to regions that also process speech-related semantics (Lau et al., 2008) and non-verbal vocalization (Belin et al., 2000; Kriegstein and Giraud, 2004). The notion that the N400 reflects the processing of meaning conveyed by musical information has been confirmed in recent studies (Goerlich et al., 2011), in which the N400 was triggered when the affective valence of word primes did not match the valence of musical or prosodic stimuli. Intra-musical meaning, in contrast, seems to be reflected by the N500 (or N5). As indicated above, the N5 follows the ERAN elicited by the perception of harmonic incongruence. However, the N5 does not just represent a function of incongruity in harmonic progressions but is rather modulated by the harmonic integration and contextual information in music that is not related to an extra-musical reference (Steinbeis and Koelsch, 2008). Lastly, musicogenic meaning may emerge from emotions evoked by the musical stimulus, which can also be associated with corresponding personal memories (see music-evoked emotions below).
Although we are not aware of any direct effects of music listening on language comprehension or other verbal functions in DoC patients, such effects have been demonstrated in other clinical populations. Music training has been used in language disorders (Daltrozzo et al., 2013) and in the rehabilitation of aphasia patients, where it led to increased structural integrity of white-matter tracts between fronto-temporal regions involved in language processing (Schlaug et al., 2010; Marchina et al., 2011). Purely perceptual treatments have also shown strong effects, including increased gray-matter volume after passive musical and verbal stimulation in stroke patients (Särkämö et al., 2014a). In this study, long-term changes (6-month follow-up) were found in the orbitofrontal cortex, anterior cingulate cortex, ventral striatum, fusiform gyrus, insula, and superior frontal gyrus (SFG) after patients listened regularly to their preferred music. Changes in the frontolimbic cortex moreover correlated with improvements in verbal memory, speech, and focused attention. Thus, the SFG and the anterior cingulate cortex (ACC) appear to be important structures that mediate between music processing and cognition.
Key messages:
• Music and language both work with temporal features of stimulation. The two domains are implemented in partially overlapping, partially analogous morphological and functional mechanisms. Successful therapeutic interventions in one of these domains can result in significant improvement in the other one as well.
• We propose that the distinct ERP components associated with the neural difference in the processing of musical syntax and musical semantics (i.e., extra-musical, intra-musical, and musicogenic meaning) may prove useful for the detection of disparate cognitive processes during music perception in DoC.
Implications for Multisensory Stimulation
Although both music and speech perception are based on auditory scene analysis (Janata, 2014), perceptual modalities should not be treated as independent entities but rather considered in the context of simultaneous multisensory integration, which explains why somatosensory and visual feedback can significantly modulate auditory perception (Wu et al., 2014). In the same vein, the close connection between production and perception in music and speech tightly links auditory and somatosensory modalities. During production, we compare acoustic feedback with the intended sound to adjust motor commands, yet we simultaneously develop corresponding somatosensory representations related to inputs from cutaneous and muscle receptors (Ito and Ostry, 2010; Simonyan and Horwitz, 2011). Based on Hebbian learning mechanisms (Hebb, 1949), this simultaneous co-activation of perceptual and motor systems leads to the phenomenon of cross-modal plasticity, which manifests as mutual facilitation of neural activity and explains altered perception in one modality when the expected sensory feedback of another modality is not in register (Gick and Derrick, 2009). For example, stretching the facial skin while listening to words alters the subjective perception of auditory feedback (Ito et al., 2009). Conversely, the manipulation of auditory feedback during speech can also alter somatosensory orofacial perception (Ito and Ostry, 2012). Champoux et al. (2011) demonstrated that amplitude modulation of auditory feedback during speech production can even induce distinct laryngeal and labial sensations that are not a mechanical consequence of the motor task, whereas Schürmann et al. (2006) showed that vibrotactile stimulation helps auditory perception in both healthy and hearing-impaired subjects.
As a rule, mutual perceptually facilitating effects are stronger when co-activation has been learned over a longer period, as shown in the example of trained musicians. In a study from Christo Pantev’s lab (Schulz et al., 2003), professional trumpet players and non-musicians received auditory (i.e., trumpet sound) and somatosensory (i.e., lip) stimulation, presented either alone or in combination. Results showed that the combined stimulation yielded significantly larger responses in MEG source waveforms in musicians than in non-musicians, suggesting that the stronger experience in task-dependent co-stimulation of somatosensory and auditory feedback facilitates their cross-modal functional processing in musicians (Pantev et al., 2003). Similar effects have been described for audio-visual processing of music, corresponding to an increased N400 response when the two modalities were incongruent. Studies in the speech domain furthermore suggest that accurate corrective vocal-motor responses to somatosensory and auditory perturbation exist in both modalities (Lametti et al., 2012), although somatosensory feedback seems to gain importance as experience increases in trained singers (Kleber et al., 2010, 2013).
The logic behind cross-modal plasticity in the context of DoC is related to the idea that simultaneously stimulating functionally corresponding auditory and somatosensory modalities could potentially boost (i.e., facilitate) the neural responses in both systems. Although there are no large-scale statistical data on the frequency of somatosensory disorders in DoC, somatosensory EPs (SSEPs) are routinely recorded in most hospitals for neurological rehabilitation. In fact, the functionality of somatosensory pathways has been successfully used to predict the long-term outcome of these disorders (de Tommaso et al., 2015; Li et al., 2015). Therefore, we suggest that the somatosensory system be explored by means of such neurophysiological techniques before it is included in combined auditory-somatosensory stimulation.
The idea of using more than one sensory modality for interacting with or stimulating DoC patients is not new. In fact, “basal” multisensory (i.e., visual, auditory, tactile, gustatory, and olfactory) stimulation has been used as a therapeutic intervention and represents a standard procedure in many German intensive care and early rehabilitation facilities (Menke, 2006). However, multisensory stimulation in DoC patients is not standardized and its therapeutic use has not been well documented (Rollnik and Altenmüller, 2014). Moreover, the concurrent stimulation of individual sensory modalities may be functionally unrelated and thus not trigger a facilitating effect, which could account for the lack of reliable evidence supporting the effectiveness of multisensory stimulation programs in patients in coma or the VS (Lombardi et al., 2002). We therefore propose to apply multisensory stimulation only in a functionally related way, for example with concurrent orofacial-tactile and corresponding auditory stimulation associated with song or speech production. This might increase the chances of detecting diagnostic ERP components in DoC and/or of facilitating therapeutic processes.
A similar line of thought follows the tight coupling between perception and action when we synchronize our body movements to an external rhythm, even without being aware of it. Timing is extremely important for movement, which can be facilitated by music perception via activation of distinct cerebellar-cortical networks involved in movement control (Thaut et al., 2009). Indeed, rhythm production and perception engage similar brain regions, including the supplementary motor area (i.e., involved in motor sequencing), the cerebellum (i.e., involved in timing), and the premotor cortex (Chen et al., 2008a). In musicians, activity in the premotor cortex has been linked to rhythm difficulty, suggesting that working memory also contributes to the organization and decomposition of acoustic temporal structures (Chen et al., 2008b). The involvement of prefrontal and temporal regions during auditory rhythm stimulation has been confirmed with both electrophysiological (direct current; Kuck et al., 2003) and PET data (Janata, 2014). The latter study furthermore found common activation patterns for rhythm, meter, and tempo within frontal, prefrontal, temporal, cingulate, parietal, and cerebellar regions. Not surprisingly, auditory rhythmic stimulation has been successfully used to facilitate motor acts in both healthy subjects and in neural rehabilitation (Molinari et al., 2003; Chen et al., 2006), since musical rhythms activate a network that is otherwise engaged by motor production and that can be distinguished from melodic processing (Bengtsson and Ullen, 2005).
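As a concrete example of rhythmic auditory stimulation, the minimal sketch below generates an isochronous click track; the tempo, click parameters, and duration are arbitrary illustrative values rather than a validated clinical protocol.

```python
# Minimal sketch: an isochronous click track for rhythmic auditory stimulation.
# Tempo and click parameters are illustrative placeholders.
import numpy as np

fs = 44100            # sampling rate (Hz)
tempo_bpm = 90        # e.g., a walking-pace tempo
duration_s = 30
click_freq = 1000.0   # 1-kHz click
click_dur = 0.01      # 10-ms click

interval = int(fs * 60 / tempo_bpm)        # samples between click onsets
click_t = np.arange(int(fs * click_dur)) / fs
click = 0.8 * np.sin(2 * np.pi * click_freq * click_t) * np.hanning(click_t.size)

track = np.zeros(int(fs * duration_s))
onsets = range(0, track.size - click.size, interval)
for onset in onsets:
    track[onset:onset + click.size] += click   # place a click at each beat

print(f"{len(onsets)} clicks at {tempo_bpm} bpm over {duration_s} s")
```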
Key messages:
• Multisensory stimulation in DoC is suggested to take into account the potentially facilitating effects of cross-modal plasticity as a result of functionally corresponding processes during production and perception in well-trained motor tasks (e.g., speech or song).
• The strong link between musical rhythm and motor behavior might be useful for testing motor-related responses to rhythmic auditory stimulation as a complementary approach to the testing of syntactic (melodic/harmonic) processing in the brain of DoC patients.
Therapeutic Effects of Musical Stimulation
Cognitive Effects
Music production is a uniquely rich multisensory experience. The development of musical skills not only enhances cognitive, sensorimotor, and perceptual abilities but also changes corresponding motor, sensory, and multimodal representations in the brain (Herholz and Zatorre, 2012). Although these changes are particularly apparent in trained musicians, available clinical studies indicate that musical stimulation and musical training can also have beneficial effects for the rehabilitation of higher-order cognitive functions, e.g., on autobiographical memory in Alzheimer’s patients (Irish et al., 2006; El Haj et al., 2012; García et al., 2012) and other kinds of dementia (Foster and Valentine, 2001). Irish et al. (2006) found that participants with mild Alzheimer’s disease recalled significantly more life events when listening to Vivaldi’s “Spring” than in a silence condition, and the effect was even larger with patients’ self-chosen music (El Haj et al., 2012).
Possible mechanisms underlying the effect of musical stimulation on cognitive functions in patients with severe neurological disorders may be associated with neuroplasticity and neurogenesis in brain regions that are activated by music. Neuroplasticity may result in healthy brain areas compensating for the disordered functions of injured areas and/or may increase the rate of neurogenesis and gray matter volume. The effect of music on neuroplasticity has been demonstrated in several studies (Stewart et al., 2003; Rickard et al., 2005; Pantev and Herholz, 2011; Herholz and Zatorre, 2012; Särkämö and Soto, 2012) and appears to be, at least partly, mediated by the production of the neurotrophin BDNF (brain-derived neurotrophic factor) in the hippocampus, which is increased in music-rich environments (Angelucci et al., 2007; Marzban et al., 2011) and involved in processes of memory formation and learning.
Another explanation for the effects of music on cognition involves the ACC and the frontal midline theta rhythm it generates, which is crucially important for emotional and cognitive processes (Bush et al., 2000). The frontal midline theta (fm-theta) is involved in working memory (Klimesch, 1997, 1999; Doppelmayr et al., 2000), episodic memory (Klimesch, 1997; von Stein and Sarnthein, 2000), emotional processing (Aftanas and Golocheikine, 2001), cognitive control (Gruendler et al., 2011; Cavanagh and Frank, 2014), and executive functioning (Miyake et al., 2000; Fisk and Sharp, 2004). In healthy subjects, ACC activation was found to correlate with pleasure responses to music (Blood and Zatorre, 2001; Baumgartner et al., 2006). Accordingly, the spectral power of the fm-theta is increased during listening to preferred pleasant music in contrast to unpleasant music (Sammler et al., 2007). Interestingly, the only study that investigated the cognitive correlates of music perception in DoC patients replicated this effect (O’Kelly et al., 2013). In this study, information about the patients’ personal music preferences was obtained from their close relatives, while for control participants this information was obtained directly. Listening to preferred songs increased the power of the fm-theta in both groups.
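Frontal midline theta power during music listening could be quantified along the lines of the minimal sketch below; the 4–8 Hz band limits, the Welch parameters, and the simulated Fz signal are illustrative assumptions, not the analysis pipeline of the cited studies.

```python
# Minimal sketch: frontal midline theta (4-8 Hz) power at Fz via Welch's method.
# Band limits and spectral parameters are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate (Hz)
rng = np.random.default_rng(2)
eeg_fz = rng.normal(0, 10, 60 * fs)        # 60 s of simulated noise (microvolts)
t = np.arange(eeg_fz.size) / fs
eeg_fz += 4 * np.sin(2 * np.pi * 6 * t)    # add a 6-Hz "theta" component

def theta_power(signal, fs, band=(4.0, 8.0)):
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)   # 4-s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])             # band-integrated power

print(f"fm-theta power at Fz (simulated): {theta_power(eeg_fz, fs):.1f} uV^2")
```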
To avoid superficial optimism, it should be noted that the effects of music on cognition may critically depend on the amount of training. Probably the rule of “the more, the better” applies here. Särkämö et al. (2014b) obtained significant effects of musical stimulation after 10 weeks of intensive training in 29 patients with dementia; the intervention included not only passive listening to music but also conversations in small groups about the music-evoked emotions, thoughts, and memories. Moreover, participants also performed homework assignments dedicated to listening to their favorite music, while their caregivers organized the music intervention sessions. Beneficial effects at the 9-month follow-up involved a positive correlation between participants’ mood, working memory performance, and the frequency of music sessions. Together, these findings indicate that music therapy and stimulation can have significant effects on cognitive and emotional aspects, whereas the intensity of the music intervention can play a key role in producing long-lasting and stable structural and functional changes in the brain.
Key messages:
• Passive listening to preferred music over longer time-periods might particularly enhance processes related to memory and cognition in DoC.
• Changes in fm-theta amplitudes during listening to pleasant music could indicate emotional and cognitive responses.
• More intensive music therapy interventions might provide better therapeutic results.
Emotional Effects
The putative association between music stimulation and cognitive improvement in DoC patients might also be mediated by positive music-evoked emotions. These positive emotions can be associated with activation of the reward system of the brain and related dopamine release. At the same time, dopamine levels can be directly related to working memory, cognitive control, and attention (Nieoullon, 2002; Cools and D’Esposito, 2011). Pharmacological studies have shown that increasing dopamine levels improves performance in working memory and executive functions in both healthy subjects (Mehta and Riedel, 2006) and patients with traumatic brain injuries (Bales et al., 2009).
Music is a potent stimulator for a wide range of basic and complex emotions associated with changes in physiological arousal, subjective feeling, and motor expression (Koelsch et al., 2006, 2008; Grewe et al., 2007a,b). The reward value of music is moreover reflected in the classic reward circuitry of the brain (Zatorre, 2015), which entails dopaminergic mesolimbic pathways including the ventral tegmental area, the striatum (ventral: nucleus accumbens; dorsal: the head of the caudate nucleus), the ventromedial and orbitofrontal cortices, the amygdala, and the insula (Berridge and Kringelbach, 2013). These regions are traditionally associated with primary and secondary rewards, yet pleasurable music is also able to activate this system (Koelsch, 2014). For example, dopamine release in response to music stimulation accompanied by pleasurable emotional reactions has been reported in a study by Salimpoor et al. (2011).
The positive effects of music on emotional states (and correspondingly cognitive processing) may be related to acoustic features of music but have also been attributed to familiarity, as the subjective liking of music can be directly correlated with the familiarity of the piece (Peretz et al., 1998; Schellenberg et al., 2008). Listening to familiar versus unfamiliar music yields higher activity in the limbic system and the orbitofrontal cortex (Pereira et al., 2011), which is in accord with data demonstrating a correlation between music-elicited positive emotions and orbitofrontal activation (Menon and Levitin, 2005).
Familiarity implies the anticipation of a pleasurable musical passage, in line with the difference between anticipation and actual experience found by Salimpoor et al. (2011). That is, activation in dopaminergic areas peaked in the dorsal striatum seconds before the maximum pleasure was experienced, related to the number of chill experiences, whereas activation in the ventral striatum was associated with the emotional intensity at the moment of the peak pleasure experience. Yet also novel (i.e., unfamiliar) pieces of music can trigger responses in the dorsal striatum when their reward values are high (Salimpoor et al., 2013), which was taken as further evidence that temporal (i.e., musical-structural) predictions may also be involved in the emotional experience of music. On the other hand, striatal connectivity with the auditory cortex that increased as a function of reward value suggests that previous memory formation could affect expectations related to emotional experience in music. Individual differences in memory formation could therefore modulate both the anticipation of intra-musical meaning (i.e., based on statistical learning of functional relationships between consecutive musical elements) and the allocation of personal “musicogenic” meaning to a musical sequence (i.e., based on personal relevance). In addition, episodic memory and musical valence are closely interrelated, such that musical pieces with a positive association are also better remembered (Eschrich et al., 2008).
Särkämö and Soto (2012) suggested that the effects of music on working memory and attention performance, which they observed in stroke patients, were partly mediated by a dopamine increase related to positive emotion. This idea is supported by the fact that depression and confusion were inversely correlated with verbal memory performance after music therapy. In another study including patients with visual neglect, the same research team (Soto et al., 2009) showed that listening to pleasant music enhanced awareness of contralesional targets.
Interestingly, brain injuries leading to DoC are often related to widespread damage of dopaminergic system axons and a reduced level of dopamine in the cerebrospinal fluid (Meythaler et al., 2002). There is even a hypothesis that DoC are mainly caused by destruction in the dopamine system (Hayashi et al., 2004) and that restoration of the normal regulation of dopamine levels has a positive effect on cognitive recovery in DoC patients. In several studies, levodopa (a precursor of dopamine) not only improved motor functions of DoC patients but also resulted in positive changes of their consciousness (Haig and Ruess, 1990; Matsuda et al., 2003, 2005; Krimchansky et al., 2004; Ugoya and Akinyemi, 2010). Moreover, the well-known placebo-controlled randomized study of traumatic DoC patients (Giacino et al., 2012) revealed a significant effect of the indirect dopamine agonist amantadine.
A recent study (Castro et al., 2015) demonstrated the aforementioned relationship between music, familiarity, and cognition in a sample of DoC patients. The study included the presentation of the subject’s own first name (SON) as a deviant stimulus among other first names as standard stimuli. Listening to excerpts from the patient’s preferred music increased the amplitude of the ERP components N2 and/or P3 to the SON in seven of 13 patients. These seven patients showed a favorable outcome 6 months after the experiment. The other six patients, who did not show any response to the SON, remained in the same state or had died 6 months later. The existence of music-evoked emotions in DoC might therefore even have predictive value and perhaps also the potential to re-activate memory traces associated with musical emotions.
Key messages:
• DoC can be related to damage of the dopaminergic system. Emotionally pleasurable music in turn can activate the dopaminergic system by inducing changes in the limbic system associated with the reward value of music, which could have beneficial effects on consciousness in DoC.
• Music therapy and musically induced positive intra-musical and musicogenic emotions might furthermore stimulate cognitive processes and personal memory activation.
• A hypothesis worth testing is that ERP components, such as the N2 and/or P3 in response to preferred music as well as changes in time-frequency theta amplitudes over frontal midline regions in the EEG, might predict the outcome of DoC in response to emotionally pleasurable music.
Stress Reduction
The influence of stress and the related cortisol level on cognitive functions has been shown in numerous studies with healthy participants, in which increased cortisol levels had a negative impact on executive functions, declarative memory, working memory, and language comprehension (McEwen and Sapolsky, 1995; Lupien et al., 1997; Lee et al., 2007). Factors mediating the negative impact of chronic stress are thought to include dendritic atrophy and synaptic loss in the hippocampus and the prefrontal cortex, as well as a decreased rate of neurogenesis in the hippocampus (Radley and Morrison, 2005). Chronic stress can also cause changes in the dopaminergic system, reducing dopamine levels in the prefrontal cortex (Mizoguchi et al., 2002), and negatively affect the immune system (Segerstrom and Miller, 2004).
Several studies have emphasized the stress-reducing value of daily music listening, with positive effects being observed on subjective, physiological, and endocrinological parameters (Linnemann et al., 2015). Even short-term exposure to musical stimulation consistently decreases cortisol levels in healthy subjects (for a systematic review, see Fancourt et al., 2014), and this effect is particularly large when participants have self-selected the music (e.g., in patients undergoing surgery; Leardi et al., 2007). Moreover, there is also evidence for positive effects of music on the immune system, as indicated by several parameters at the cytokine, leukocyte, and immunoglobulin levels (Fancourt et al., 2014).
Convincing evidence suggests that traumatic brain injury, stroke, and other frequent neuropathological factors can induce stress reactions over both the short term (Franceschini et al., 2001; Prasanna et al., 2015) and the long term (Sojka et al., 2006; Marina et al., 2015). These findings suggest that DoC of traumatic or non-traumatic etiology may likewise be accompanied by chronic stress, although the available data are inconsistent. Whereas Vogel et al. (1990) found an increased cortisol level in VS patients using 24-h monitoring, Munno et al. (1998) reported cortisol levels below normal values both in VS patients and in a group of exit-VS patients who had been conscious for more than 6 months. Another study of VS patients in a long-term care facility (mean disease duration 6.2 ± 5.1 years) did not reveal any significant differences from a control group (Oppl et al., 2014). In addition, a case study reported a VS patient whose level of consciousness improved after injections of autologous activated immune cells (Fellerhoff et al., 2012).
Key message:
• Music has the potential to enhance cognitive functions in DoC through a decrease of stress and the related drop in cortisol levels, together with activation of the immune system.
Conclusion
Direct evidence for positive effects of music therapy interventions on cognitive functions in DoC is still very scarce. In this paper we summarized the theoretical justification for the idea that properly organized music stimulation programs can indeed lead to the suggested beneficial effects. At the level of the (primary and secondary) sensory cortical areas, the auditory modality reveals a particular potential for delivering stimulation that combines sufficient complexity with accessibility for severely brain-damaged patients. In this context, we strongly suggest the use of complex sounds rather than sine tones in DoC. Cognitive mechanisms would capitalize on the specific psychological and neurophysiological affinity between music and speech processing, based on the great similarity between these two domains of human culture. This entails the identification of auditory objects and can result in clinically important dissociations between disorders of the processing of musical syntax and of musical meaning, which are reflected in changes of the corresponding ERP components. The neuroplastic associations with music may furthermore lead to functional improvement of memory and attention beyond the language domain, whereas multisensory stimulation based on previously acquired cross-modal plasticity may facilitate electrophysiological responses as well as functional improvement. Moreover, musically stimulated rhythmic processes in the nervous system could serve as a starting point for rehabilitation. A quite different mechanism mediating the hypothesized positive effects of music in DoC runs through music-evoked emotions, which have the potential to activate the dopaminergic system and may thus lead to a suppression of the stress response system. The diagnostic value of musically evoked emotions may be captured by ERP components such as the N2 and/or P3, as well as by changes in time-frequency theta amplitudes over frontal midline regions of the EEG. However, more research is needed to address the ecological validity of these suggestions and thus to come to more conclusive results in this patient group, even though the organization and performance of such studies is highly demanding.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgment
We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) and the Open Access Publishing Fund of the University of Tübingen.
References
- Abrams D. A., Nicol T., Zecker S., Kraus N. (2011). A possible role for a paralemniscal auditory pathway in the coding of slow temporal information. Hear. Res. 272, 125–134. 10.1016/j.heares.2010.10.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aftanas L. I., Golocheikine S. A. (2001). Human anterior and frontal midline theta and lower alpha reflect emotionally positive state and internalized attention: high-resolution EEG investigation of meditation. Neurosci. Lett. 310, 57–60. 10.1016/S0304-3940(01)02094-8 [DOI] [PubMed] [Google Scholar]
- Andrews K., Murphy L., Munday R., Littlewood C. (1996). Misdiagnosis of the vegetative state: retrospective study in a rehabilitation unit. BMJ 313, 13–16. 10.1136/bmj.313.7048.13 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Angelucci F., Fiore M., Ricci E., Padua L., Sabino A., Tonali P. A. (2007). Investigating the neurobiology of music: brain-derived neurotrophic factor modulation in the hippocampus of young adult mice. Behav. Pharmacol. 18, 491–496. 10.1097/FBP.0b013e3282d28f50 [DOI] [PubMed] [Google Scholar]
- Angulo-Perkins A., Concha L. (2014). “Music perception: information flow within the human auditory cortices,” in Neurobiology of Interval Timing, eds Merchant H., de Lafuente V. (New York: Springer Science+Business Media; ), 293–303. [DOI] [PubMed] [Google Scholar]
- Bales J. W., Wagner A. K., Kline A. E., Dixon C. E. (2009). Persistent cognitive dysfunction after traumatic brain injury: a dopamine hypothesis. Neurosci. Biobehav. Rev. 33, 981–1003. 10.1016/j.neubiorev.2009.03.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baumgartner T., Esslen M., Jäncke L. (2006). From emotion perception to emotion experience: emotions evoked by pictures and classical music. Int. J. Psychophysiol. 60, 34–43. 10.1016/j.ijpsycho.2005.04.007 [DOI] [PubMed] [Google Scholar]
- Bekinschtein T. A., Shalom D. E., Forcato C., Herrera M., Coleman M. R., Manes F. F., et al. (2009). Classical conditioning in the vegetative and minimally conscious state. Nat. Neurosci. 12, 1343–1349. 10.1038/nn.2391 [DOI] [PubMed] [Google Scholar]
- Belin P., Zatorre R. J., Lafaille P., Ahad P., Pike B. (2000). Voice-selective areas in human auditory cortex. Nature 403, 309–312. 10.1038/35002078 [DOI] [PubMed] [Google Scholar]
- Bengtsson S. L., Ullen F. (2005). Dissociation between melodic and rhythmic processing during piano performance from musical scores. Neuroimage 30, 272–284. 10.1016/j.neuroimage.2005.09.019 [DOI] [PubMed] [Google Scholar]
- Berridge K. C., Kringelbach M. L. (2013). Neuroscience of affect: brain mechanisms of pleasure and displeasure. Curr. Opin. Neurobiol. 23, 294–303. 10.1016/j.conb.2013.01.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Besson M., Faïta F. (1995). An event-related potential (ERP) study of musical expectancy: comparison of musicians with nonmusicians. J. Exp. Psychol. Hum. Percept. Perform. 21, 1278–1296. 10.1037/0096-1523.21.6.1278 [DOI] [Google Scholar]
- Blood A. J., Zatorre R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. U.S.A. 98, 11818–11823. 10.1073/pnas.191355898 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brattico E., Tervaniemi M., Näätänen R., Peretz I. (2006). Musical scale properties are automatically processed in the human auditory cortex. Brain Res. 1117, 162–174. 10.1016/j.brainres.2006.08.023 [DOI] [PubMed] [Google Scholar]
- Bruno M. A., Vanhaudenhuyse A., Thibaut A., Moonen G., Laureys S. (2011). From unresponsive wakefulness to minimally conscious PLUS and functional locked-in syndromes: recent advances in our understanding of disorders of consciousness. J. Neurol. 258, 1373–1384. 10.1007/s00415-011-6114-x [DOI] [PubMed] [Google Scholar]
- Buchsbaum B. R., Hickok G., Humphries C. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cogn. Sci. 25, 663–678. 10.1207/s15516709cog2505_2 [DOI] [Google Scholar]
- Bush G., Luu P., Posner M. I. (2000). Cognitive and emotional influences in anterior cingulate cortex. Trends Cogn. Sci. 4, 215–222. 10.1016/S1364-6613(00)01483-2 [DOI] [PubMed] [Google Scholar]
- Castro M., Tillmann B., Luaute J., Corneyllie A., Dailler F., Andre-Obadia N., et al. (2015). Boosting cognition with music in patients with disorders of consciousness. Neurorehabil. Neural Repair 29, 734–742. 10.1177/1545968314565464 [DOI] [PubMed] [Google Scholar]
- Cavanagh J. F., Frank M. J. (2014). Frontal theta as a mechanism for cognitive control. Trends Cogn. Sci. 18, 414–421. 10.1016/j.tics.2014.04.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Champoux F., Shiller D. M., Zatorre R. J. (2011). Feel what you say: an auditory effect on somatosensory perception. PLoS ONE 6:e22829. 10.1371/journal.pone.0022829 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chang E. F., Rieger J. W., Johnson K., Berger M. S., Barbaro N. M., Knight R. T. (2010). Categorical speech representation in human superior temporal gyrus. Nat. Neurosci. 13, 1428–1432. 10.1038/nn.2641 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen J. L., Penhune V. B., Zatorre R. J. (2008a). Listening to musical rhythms recruits motor regions of the brain. Cereb. Cortex 18, 2844–2854. 10.1093/cercor/bhn042 [DOI] [PubMed] [Google Scholar]
- Chen J. L., Penhune V. B., Zatorre R. J. (2008b). Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training. J. Cogn. Neurosci. 20, 226–239. 10.1162/jocn.2008.20018 [DOI] [PubMed] [Google Scholar]
- Chen J. L., Zatorre R. J., Penhune V. B. (2006). Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms. Neuroimage 32, 1771–1781. 10.1016/j.neuroimage.2006.04.207 [DOI] [PubMed] [Google Scholar]
- Cools R., D’Esposito M. (2011). Inverted-U-shaped dopamine actions on human working memory and cognitive control. Biol. Psychiatry 69, e113–e125. 10.1016/j.biopsych.2011.03.028 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Daltrozzo J., Conway C. M., Smith G. N. (2013). Rehabilitating language disorders by improving sequential processing: a review. J. Macro Trends Health Med. 1, 41–57. [PMC free article] [PubMed] [Google Scholar]
- Daltrozzo J., Schön D. (2009a). Conceptual processing in music as revealed by N400 effects on words and musical targets. J. Cogn. Neurosci. 21, 1882–1892. 10.1162/jocn.2009.21113 [DOI] [PubMed] [Google Scholar]
- Daltrozzo J., Schön D. (2009b). Is conceptual processing in music automatic? An electrophysiological approach. Brain Res. 1270, 88–94. 10.1016/j.brainres.2009.03.019 [DOI] [PubMed] [Google Scholar]
- Daltrozzo J., Wioland N., Mutschler V., Kotchoubey B. (2007). Predicting coma and other low responsive patients outcome using event-related brain potentials: a meta-analysis. Clin. Neurophysiol. 118, 606–614. 10.1016/j.clinph.2006.11.019 [DOI] [PubMed] [Google Scholar]
- Darwin C. J. (2008). Listening to speech in the presence of other sounds. Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 1011–1021. 10.1098/rstb.2007.2156 [DOI] [PMC free article] [PubMed] [Google Scholar]
- de Tommaso M., Navarro J., Lanzillotti C., Ricci K., Buonocunto F., Livrea P., et al. (2015). Cortical responses to salient nociceptive and not nociceptive stimuli in vegetative and minimal conscious state. Front. Hum. Neurosci. 9:17. 10.3389/fnhum.2015.00017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Doppelmayr M., Klimesch W., Schwaiger J., Stadler W., Rohm D. (2000). The time locked theta response reflects interindividual differences in human memory performance. Neurosci. Lett. 278, 141–144. 10.1016/S0304-3940(99)00925-8 [DOI] [PubMed] [Google Scholar]
- El Haj M., Postal V., Allain P. (2012). Music enhances autobiographical memory in mild Alzheimer’s disease. Educ. Gerontol. 38, 30–41. 10.1080/03601277.2010.515897 [DOI] [Google Scholar]
- Erlbeck H., Kübler A., Kotchoubey B., Veser S. (2014). Task instructions modulate the attentional mode affecting the auditory MMN and the semantic N400. Front. Hum. Neurosci. 8:654. 10.3389/fnhum.2014.00654 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eschrich S., Munte T. F., Altenmuller E. O. (2008). Unforgettable film music: the role of emotion in episodic long-term memory for music. BMC Neurosci. 9:48. 10.1186/1471-2202-9-48 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fancourt D., Ockelford A., Belai A. (2014). The psychoneuroimmunological effects of music: a systematic review and a new model. Brain Behav. Immun. 36, 15–26. 10.1016/j.bbi.2013.10.014 [DOI] [PubMed] [Google Scholar]
- Fellerhoff B., Laumbacher B., Wank R. (2012). Responsiveness of a patient in a persistent vegetative state after a coma to weekly injections of autologous activated immune cells: a case report. J. Med. Case Rep. 6, 6. 10.1186/1752-1947-6-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fisk J. E., Sharp C. A. (2004). Age-related impairment in executive functioning: updating, inhibition, shifting, and access. J. Clin. Exp. Neuropsychol. 26, 874–890. 10.1080/13803390490510680 [DOI] [PubMed] [Google Scholar]
- Formisano E., Kim D. S., Di Salle F., van de Moortele P. F., Ugurbil K., Goebel R. (2003). Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40, 859–869. 10.1016/S0896-6273(03)00669-X [DOI] [PubMed] [Google Scholar]
- Formisano R., Vinicola V., Penta F., Matteis M., Brunelli S., Weckel J. W. (2001). Active music therapy in the rehabilitation of severe brain injured patients during coma recovery. Ann. Ist. Super. Sanita 37, 627–630. [PubMed] [Google Scholar]
- Foster N. A., Valentine E. R. (2001). The effect of auditory stimulation on autobiographical recall in dementia. Exp. Aging Res. 27, 215–228. 10.1080/036107301300208664 [DOI] [PubMed] [Google Scholar]
- Franceschini R., Tenconi G. L., Zoppoli F., Barreca T. (2001). Endocrine abnormalities and outcome of ischaemic stroke. Biomed. Pharmacother. 55, 458–465. 10.1016/S0753-3322(01)00086-5 [DOI] [PubMed] [Google Scholar]
- Francois C., Schön D. (2014). Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice. Hear. Res. 308, 122–128. 10.1016/j.heares.2013.08.018 [DOI] [PubMed] [Google Scholar]
- Galton C. J., Gomez-Anson B., Antoun N., Scheltens P., Patterson K., Graves M., et al. (2001). Temporal lobe rating scale: application to Alzheimer’s disease and frontotemporal dementia. J. Neurol. Neurosurg. Psychiatry 70, 165–173. 10.1136/jnnp.70.2.165 [DOI] [PMC free article] [PubMed] [Google Scholar]
- García J. M. M., Iodice R., Carro J., Sánchez J. A., Palmero F., Mateos A. M. (2012). Improvement of autobiographic memory recovery by means of sad music in Alzheimer’s disease type dementia. Aging Clin. Exp. Res. 24, 227–232. 10.3275/7874 [DOI] [PubMed] [Google Scholar]
- Giacino J. T., Whyte J., Bagiella E., Kalmar K., Childs N., Khademi A., et al. (2012). Placebo-controlled trial of amantadine for severe traumatic brain injury. N. Engl. J. Med. 366, 819–826. 10.1056/NEJMoa1102609 [DOI] [PubMed] [Google Scholar]
- Gick B., Derrick D. (2009). Aero-tactile integration in speech perception. Nature 462, 502–504. 10.1038/nature08572 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Goerlich K. S., Witteman J., Aleman A., Martens S. (2011). Hearing feelings: affective categorization of music and speech in alexithymia, an ERP study. PLoS ONE 6:e19501. 10.1371/journal.pone.0019501 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grahn J. A., Brett M. (2007). Rhythm and beat perception in motor areas of the brain. J. Cogn. Neurosci. 19, 893–906. 10.1162/jocn.2007.19.5.893 [DOI] [PubMed] [Google Scholar]
- Grewe O., Nagel F., Kopiez R., Altenmuller E. (2007a). Listening to music as a re-creative process: physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Percept. 24, 297–314. 10.1525/mp.2007.24.3.297 [DOI] [Google Scholar]
- Grewe O., Nagel F., Kopiez R., Altenmüller E. (2007b). Emotions over time: synchronicity and development of subjective, physiological, and facial affective reactions to music. Emotion 7, 774–788. 10.1037/1528-3542.7.4.774 [DOI] [PubMed] [Google Scholar]
- Griffiths T. D. (2001). The neural processing of complex sounds. Ann. N. Y. Acad. Sci. 930, 133–142. 10.1111/j.1749-6632.2001.tb05729.x [DOI] [PubMed] [Google Scholar]
- Gruendler T. O., Ullsperger M., Huster R. J. (2011). Event-related potential correlates of performance-monitoring in a lateralized time-estimation task. PLoS ONE 6:e25591. 10.1371/journal.pone.0025591 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Haig A. J., Ruess J. M. (1990). Recovery from vegetative state of six months’ duration associated with Sinemet (levodopa/carbidopa). Arch. Phys. Med. Rehabil. 71, 1081–1083. [PubMed] [Google Scholar]
- Hayashi N., Moriya T., Kinoshita K., Utagawa A., Sakurai A. (2004). “Persistent vegetation means unconsciousness? how to manage vegetation and memory disturbances following severe brain damage,” in Hypothermia for Acute Brain Damage, eds Hayashi N., Bullock M. R., Dietrich D., Maekawa T., Tamura A. (Japan: Springer; ), 327–342. [Google Scholar]
- Hebb D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley. [Google Scholar]
- Herholz S. C., Zatorre R. J. (2012). Musical training as a framework for brain plasticity: behavior, function, and structure. Neuron 76, 486–502. 10.1016/j.neuron.2012.10.011 [DOI] [PubMed] [Google Scholar]
- Irish M., Cunningham C. J., Walsh J. B., Coakley D., Lawlor B. A., Robertson I. H., et al. (2006). Investigating the enhancing effect of music on autobiographical memory in mild Alzheimer’s disease. Dement. Geriatr. Cogn. Disord. 22, 108–120. 10.1159/000093487 [DOI] [PubMed] [Google Scholar]
- Ito T., Ostry D. J. (2010). Somatosensory contribution to motor learning due to facial skin deformation. J. Neurophysiol. 104, 1230–1238. 10.1152/jn.00199.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ito T., Ostry D. J. (2012). Speech sounds alter facial skin sensation. J. Neurophysiol. 107, 442–447. 10.1152/jn.00029.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ito T., Tiede M., Ostry D. J. (2009). Somatosensory function in speech perception. Proc. Natl. Acad. Sci. U.S.A. 106, 1245–1248. 10.1073/pnas.0810063106 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Janata P. (1995). ERP measures assay the degree of expectancy violation of harmonic contexts in music. J. Cogn. Neurosci. 7, 153–164. 10.1162/jocn.1995.7.2.153 [DOI] [PubMed] [Google Scholar]
- Janata P. (2014). Neural basis of music perception. Handb. Clin. Neurol. 129, 187–205. 10.1016/B978-0-444-62630-1.00011-1 [DOI] [PubMed] [Google Scholar]
- Kleber B., Veit R., Birbaumer N., Gruzelier J., Lotze M. (2010). The brain of opera singers: experience-dependent changes in functional activation. Cereb. Cortex 20, 1144–1152. 10.1093/cercor/bhp177 [DOI] [PubMed] [Google Scholar]
- Kleber B., Zeitouni A. G., Friberg A., Zatorre R. J. (2013). Experience-dependent modulation of feedback integration during singing: role of the right anterior insula. J. Neurosci. 33, 6070–6080. 10.1523/JNEUROSCI.4418-12.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Klimesch W. (1997). EEG-alpha rhythms and memory processes. Int. J. Psychophysiol. 26, 319–340. 10.1016/S0167-8760(97)00773-3 [DOI] [PubMed] [Google Scholar]
- Klimesch W. (1999). EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Res. Brain Res. Rev. 29, 169–195. 10.1016/S0165-0173(98)00056-3 [DOI] [PubMed] [Google Scholar]
- Koelsch S. (2006). Significance of Broca’s area and ventral premotor cortex for music-syntactic processing. Cortex 42, 518–520. 10.1016/S0010-9452(08)70390-3 [DOI] [PubMed] [Google Scholar]
- Koelsch S. (2012). Brain and Music. Hoboken, NJ: Wiley-Blackwell. [Google Scholar]
- Koelsch S. (2014). Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 15, 170–180. 10.1038/nrn3666 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Fritz T., Schlaug G. (2008). Amygdala activity can be modulated by unexpected chord functions during music listening. Neuroreport 19, 1815–1819. 10.1097/WNR.0b013e32831a8722 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Fritz T., V Cramon D. Y., Muller K., Friederici A. D. (2006). Investigating emotion with music: an fMRI study. Hum. Brain Mapp. 27, 239–250. 10.1002/hbm.20180 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Koelsch S., Gunter T., Friederici A. D., Schröger E. (2000). Brain indices of music processing: “nonmusicians” are musical. J. Cogn. Neurosci. 12, 520–541. 10.1162/089892900562183 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Gunter T. C., Schröger E., Tervaniemi M., Sammler D., Friederici A. D. (2001). Differentiating ERAN and MMN: an ERP study. Neuroreport 12, 1385–1389. 10.1097/00001756-200105250-00019 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Jentschke S. (2010). Differences in electric brain responses to melodies and chords. J. Cogn. Neurosci. 22, 2251–2262. 10.1162/jocn.2009.21338 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Kasper E., Sammler D., Schulze K., Gunter T., Friederici A. D. (2004). Music, language and meaning: brain signatures of semantic processing. Nat. Neurosci. 7, 302–307. 10.1038/nn1197 [DOI] [PubMed] [Google Scholar]
- Koelsch S., Siebel W. A. (2005). Towards a neural basis of music perception. Trends Cogn. Sci. 9, 578–584. 10.1016/j.tics.2005.10.001 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B. (2005). Apallic syndrome is not apallic: is vegetative state vegetative? Neuropsychol. Rehabil. 15, 333–356. 10.1080/09602010443000416 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B. (2006). Event-related potentials, cognition, and behavior: a biological approach. Neurosci. Biobehav. Rev. 30, 42–65. 10.1016/j.neubiorev.2005.04.002 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B. (2015). “Event-related potentials in disorders of consciousness,” in Clinical Neurophysiology in Disorders of Consciousness, eds Rossetti A. O., Laureys S. (Vienna: Springer; ), 107–123. [Google Scholar]
- Kotchoubey B., Kaiser J., Bostanov V., Lutzenberger W., Birbaumer N. (2009). Recognition of affective prosody in brain-damaged patients and healthy controls: a neurophysiological study using EEG and whole-head MEG. Cogn. Affect. Behav. Neurosci. 9, 153–167. 10.3758/CABN.9.2.153 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B., Lang S., Baales R., Herb E., Maurer P., Mezger G., et al. (2001). Brain potentials in human patients with extremely severe diffuse brain damage. Neurosci. Lett. 301, 37–40. 10.1016/S0304-3940(01)01600-7 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B., Lang S., Herb E., Maurer P., Schmalohr D., Bostanov V., et al. (2003). Stimulus complexity enhances auditory discrimination in patients with extremely severe brain injuries. Neurosci. Lett. 352, 129–132. 10.1016/j.neulet.2003.08.045 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B., Lang S., Mezger G., Schmalohr D., Schneck M., Semmler A., et al. (2005). Information processing in severe disorders of consciousness: vegetative state and minimally conscious state. Clin. Neurophysiol. 116, 2441–2453. 10.1016/j.clinph.2005.03.028 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B., Veser S., Real R., Herbert C., Lang S., Kübler A. (2013). Towards a more precise neurophysiological assessment of cognitive functions in patients with disorders of consciousness. Restor. Neurol. Neurosci. 31, 473–485. 10.3233/RNN-120307 [DOI] [PubMed] [Google Scholar]
- Kotchoubey B., Yu T., Mueller F., Vogel D., Veser S., Lang S. (2014). True or false? Activations of language-related areas in patients with disorders of consciousness. Curr. Pharm. Des. 20, 4239–4247. [PubMed] [Google Scholar]
- Kriegstein K. V., Giraud A. L. (2004). Distinct functional substrates along the right superior temporal sulcus for the processing of voices. Neuroimage 22, 948–955. 10.1016/j.neuroimage.2004.02.020 [DOI] [PubMed] [Google Scholar]
- Krimchansky B. Z., Keren O., Sazbon L., Groswasser Z. (2004). Differential time and related appearance of signs, indicating improvement in the state of consciousness in vegetative state traumatic brain injury (VS-TBI) patients after initiation of dopamine treatment. Brain Inj. 18, 1099–1105. 10.1080/02699050310001646206 [DOI] [PubMed] [Google Scholar]
- Kübler A., Kotchoubey B. (2007). Brain-computer interfaces in the continuum of consciousness. Curr. Opin. Neurol. 20, 643–649. 10.1097/WCO.0b013e3282f14782 [DOI] [PubMed] [Google Scholar]
- Kuck H., Grossbach M., Bangert M., Altenmuller E. (2003). Brain processing of meter and rhythm in music. Electrophysiological evidence of a common network. Ann. N. Y. Acad. Sci. 999, 244–253. 10.1196/annals.1284.035 [DOI] [PubMed] [Google Scholar]
- Lametti D. R., Nasir S. M., Ostry D. J. (2012). Sensory preference in speech production revealed by simultaneous alteration of auditory and somatosensory feedback. J. Neurosci. 32, 9351–9358. 10.1523/JNEUROSCI.0404-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lau E. F., Phillips C., Poeppel D. (2008). A cortical network for semantics: (de)constructing the N400. Nat. Rev. Neurosci. 9, 920–933. 10.1038/nrn2532 [DOI] [PubMed] [Google Scholar]
- Leardi S., Pietroletti R., Angeloni G., Necozione S., Ranalletta G., Del Gusto B. (2007). Randomized clinical trial examining the effect of music therapy in stress response to day surgery. Br. J. Surg. 94, 943–947. 10.1002/bjs.5914 [DOI] [PubMed] [Google Scholar]
- Lee B. K., Glass T. A., McAtee M. J., Wand G. S., Bandeen-Roche K., Bolla K. I., et al. (2007). Associations of salivary cortisol with cognitive function in the Baltimore memory study. Arch. Gen. Psychiatry 64, 810–818. 10.1001/archpsyc.64.7.810 [DOI] [PubMed] [Google Scholar]
- Lee Y. C., Lei C. Y., Shih Y. S., Zhang W. C., Wang H. M., Tseng C. L., et al. (2011). HRV response of vegetative state patient with music therapy. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2011, 1701–1704. 10.1109/IEMBS.2011.6090488 [DOI] [PubMed] [Google Scholar]
- Li L., Kang X. G., Qi S., Xu X. X., Xiong L. Z., Zhao G., et al. (2015). Brain response to thermal stimulation predicts outcome of patients with chronic disorders of consciousness. Clin. Neurophysiol. 126, 1539–1547. 10.1016/j.clinph.2014.10.148 [DOI] [PubMed] [Google Scholar]
- Liegeois-Chauvel C., Peretz I., Babai M., Laguitton V., Chauvel P. (1998). Contribution of different cortical areas in the temporal lobes to music processing. Brain 121(Pt 10), 1853–1867. 10.1093/brain/121.10.1853 [DOI] [PubMed] [Google Scholar]
- Linnemann A., Ditzen B., Strahler J., Doerr J. M., Nater U. M. (2015). Music listening as a means of stress reduction in daily life. Psychoneuroendocrinology 60, 82–90. 10.1016/j.psyneuen.2015.06.008 [DOI] [PubMed] [Google Scholar]
- Lombardi F., Taricco M., De Tanti A., Telaro E., Liberati A. (2002). Sensory stimulation of brain-injured individuals in coma or vegetative state: results of a Cochrane systematic review. Clin. Rehabil. 16, 464–472. 10.1191/0269215502cr519oa [DOI] [PubMed] [Google Scholar]
- Lupien S. J., Gaudreau S., Tchiteya B. M., Maheu F., Sharma S., Nair N. P., et al. (1997). Stress-induced declarative memory impairment in healthy elderly subjects: relationship to cortisol reactivity. J. Clin. Endocrinol. Metab. 82, 2070–2075. 10.1210/jcem.82.7.4075 [DOI] [PubMed] [Google Scholar]
- Magee W. L. (2005). Music therapy with patients in low awareness states: approaches to assessment and treatment in multidisciplinary care. Neuropsychol. Rehabil. 15, 522–536. 10.1080/09602010443000461 [DOI] [PubMed] [Google Scholar]
- Magee W. L., O’Kelly J. (2015). Music therapy with disorders of consciousness: current evidence and emergent evidence-based practice. Ann. N. Y. Acad. Sci. 1337, 256–262. 10.1111/nyas.12633 [DOI] [PubMed] [Google Scholar]
- Marchina S., Zhu L. L., Norton A., Zipse L., Wan C. Y., Schlaug G. (2011). Impairment of speech production predicted by lesion load of the left arcuate fasciculus. Stroke 42, 2251–2256. 10.1161/STROKEAHA.110.606103 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marina D., Klose M., Nordenbo A., Liebach A., Feldt-Rasmussen U. (2015). Early endocrine alterations reflect prolonged stress and relate to 1-year functional outcome in patients with severe brain injury. Eur. J. Endocrinol. 172, 813–822. 10.1530/EJE-14-1152 [DOI] [PubMed] [Google Scholar]
- Marzban M., Shahbazi A., Tondar M., Soleimani M., Bakhshayesh M., Moshkforoush A., et al. (2011). Effect of Mozart music on hippocampal content of BDNF in postnatal rats. Basic Clin. Neurosci. 2, 21–26. [Google Scholar]
- Matsuda W., Komatsu Y., Yanaka K., Matsumura A. (2005). Levodopa treatment for patients in persistent vegetative or minimally conscious states. Neuropsychol. Rehabil. 15, 414–427. 10.1080/09602010443000588 [DOI] [PubMed] [Google Scholar]
- Matsuda W., Matsumura A., Komatsu Y., Yanaka K., Nose T. (2003). Awakenings from persistent vegetative state: report of three cases with parkinsonism and brain stem lesions on MRI. J. Neurol. Neurosurg. Psychiatry 74, 1571–1573. 10.1136/jnnp.74.11.1571 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McEwen B. S., Sapolsky R. M. (1995). Stress and cognitive function. Curr. Opin. Neurobiol. 5, 205–216. 10.1016/0959-4388(95)80028-X [DOI] [PubMed] [Google Scholar]
- Mehta M. A., Riedel W. J. (2006). Dopaminergic enhancement of cognitive function. Curr. Pharm. Des. 12, 2487–2500. 10.2174/138161206777698891 [DOI] [PubMed] [Google Scholar]
- Menke P. (2006). [Basal stimulation of persons in a vegetative state–a case report: back into a more aware life]. Pflege Z. 59, 164–165. [PubMed] [Google Scholar]
- Menon V., Levitin D. J. (2005). The rewards of music listening: response and physiological connectivity of the mesolimbic system. Neuroimage 28, 175–184. 10.1016/j.neuroimage.2005.05.053 [DOI] [PubMed] [Google Scholar]
- Meythaler J. M., Brunner R. C., Johnson A., Novack T. A. (2002). Amantadine to improve neurorecovery in traumatic brain injury-associated diffuse axonal injury: a pilot double-blind randomized trial. J. Head Trauma Rehabil. 17, 300–313. 10.1097/00001199-200208000-00004 [DOI] [PubMed] [Google Scholar]
- Miyake A., Friedman N. P., Emerson M. J., Witzki A. H., Howerter A., Wager T. D. (2000). The unity and diversity of executive functions and their contributions to complex “Frontal Lobe” tasks: a latent variable analysis. Cogn. Psychol. 41, 49–100. 10.1006/cogp.1999.0734 [DOI] [PubMed] [Google Scholar]
- Mizoguchi K., Yuzurihara M., Nagata M., Ishige A., Sasaki H., Tabira T. (2002). Dopamine-receptor stimulation in the prefrontal cortex ameliorates stress-induced rotarod impairment. Pharmacol. Biochem. Behav. 72, 723–728. 10.1016/S0091-3057(02)00747-5 [DOI] [PubMed] [Google Scholar]
- Molinari M., Leggio M. G., De Martin M., Cerasa A., Thaut M. (2003). Neurobiology of rhythmic motor entrainment. Ann. N. Y. Acad. Sci. 999, 313–321. 10.1196/annals.1284.042 [DOI] [PubMed] [Google Scholar]
- Munno I., Damiani S., Scardapane R., Lacedra G., Megna M., Patimo C., et al. (1998). Evaluation of hypothalamic-pituitary-adrenocortical hormones and inflammatory cytokines in patients with persistent vegetative state. Immunopharmacol. Immunotoxicol. 20, 519–529. 10.3109/08923979809031513 [DOI] [PubMed] [Google Scholar]
- Musacchia G., Large E. W., Schroeder C. E. (2014). Thalamocortical mechanisms for integrating musical tone and rhythm. Hear. Res. 308, 50–59. 10.1016/j.heares.2013.09.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Näätänen R. (1995). The mismatch negativity: a powerful tool for cognitive neuroscience. Ear Hear. 16, 6–18. 10.1097/00003446-199502000-00002 [DOI] [PubMed] [Google Scholar]
- Nelken I., Bizley J., Shamma S. A., Wang X. (2014). Auditory cortical processing in real-world listening: the auditory system going real. J. Neurosci. 34, 15135–15138. 10.1523/JNEUROSCI.2989-14.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nieoullon A. (2002). Dopamine and the regulation of cognition and attention. Prog. Neurobiol. 67, 53–83. 10.1016/S0301-0082(02)00011-4 [DOI] [PubMed] [Google Scholar]
- O’Kelly J., James L., Palaniappan R., Taborin J., Fachner J., Magee W. L. (2013). Neurophysiological and behavioral responses to music therapy in vegetative and minimally conscious states. Front. Hum. Neurosci. 7:884. 10.3389/fnhum.2013.00884 [DOI] [PMC free article] [PubMed] [Google Scholar]
- O’Kelly J., Magee W. L. (2013). The complementary role of music therapy in the detection of awareness in disorders of consciousness: an audit of concurrent SMART and MATADOC assessments. Neuropsychol. Rehabil. 23, 287–298. 10.1080/09602011.2012.753395 [DOI] [PubMed] [Google Scholar]
- Ono K., Altmann C. F., Matsuhashi M., Mima T., Fukuyama H. (2015). Neural correlates of perceptual grouping effects in the processing of sound omission by musicians and nonmusicians. Hear. Res. 319, 25–31. 10.1016/j.heares.2014.10.013 [DOI] [PubMed] [Google Scholar]
- Oppl B., Michitsch G., Misof B., Kudlacek S., Donis J., Klaushofer K., et al. (2014). Low bone mineral density and fragility fractures in permanent vegetative state patients. J. Bone Miner. Res. 29, 1096–1100. 10.1002/jbmr.2122 [DOI] [PubMed] [Google Scholar]
- Osterhout L. (1995). Event-related brain potentials elicited by failure to agree. J. Mem. Lang. 34, 739–773. 10.1006/jmla.1995.1033 [DOI] [Google Scholar]
- Pandya D. N. (1995). Anatomy of the auditory cortex. Rev. Neurol. 151, 486–494. [PubMed] [Google Scholar]
- Pantev C., Herholz S. C. (2011). Plasticity of the human auditory cortex related to musical training. Neurosci. Biobehav. Rev. 35, 2140–2154. 10.1016/j.neubiorev.2011.06.010 [DOI] [PubMed] [Google Scholar]
- Pantev C., Ross B., Fujioka T., Trainor L. J., Schulte M., Schulz M. (2003). Music and learning-induced cortical plasticity. Ann. N. Y. Acad. Sci. 999, 438–450. 10.1196/annals.1284.054 [DOI] [PubMed] [Google Scholar]
- Patel A. D. (2003). Language, music, syntax and the brain. Nat. Neurosci. 6, 674–681. 10.1038/nn1082 [DOI] [PubMed] [Google Scholar]
- Patel A. D. (2008). Music, Language, and the Brain. Oxford: Oxford University Press. [Google Scholar]
- Patel A. D. (2011). Why would musical training benefit the neural encoding of speech? The OPERA hypothesis. Front. Psychol. 2:142. 10.3389/fpsyg.2011.00142 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Patel A. D., Gibson E., Ratner J., Besson M., Holcomb P. J. (1998). Processing syntactic relations in language and music: an event-related potential study. J. Cogn. Neurosci. 10, 717–733. 10.1162/089892998563121 [DOI] [PubMed] [Google Scholar]
- Paulraj M. P., Subramaniam K., Yaccob S. B., Adom A. H., Hema C. R. (2015). Auditory evoked potential response and hearing loss: a review. Open Biomed. Eng. J. 9, 17–24. 10.2174/1874120701509010017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pereira C. S., Teixeira J., Figueiredo P., Xavier J., Castro S. L., Brattico E. (2011). Music and emotions in the brain: familiarity matters. PLoS ONE 6:e27241. 10.1371/journal.pone.0027241 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peretz I., Gaudreau D., Bonnel A. M. (1998). Exposure effects on music preference and recognition. Mem. Cogn. 26, 884–902. 10.3758/BF03201171 [DOI] [PubMed] [Google Scholar]
- Peretz I., Zatorre R. J. (2005). Brain organization for music processing. Annu. Rev. Psychol. 56, 89–114. 10.1146/annurev.psych.56.091103.070225 [DOI] [PubMed] [Google Scholar]
- Pickles J. O. (1988). An Introduction to the Physiology of Hearing. London: Academic Press. [Google Scholar]
- Poulin-Charronnat B., Bigand E., Koelsch S. (2006). Processing of musical syntax tonic versus subdominant: an event-related potential study. J. Cogn. Neurosci. 18, 1545–1554. 10.1162/jocn.2006.18.9.1545 [DOI] [PubMed] [Google Scholar]
- Prasanna K. L., Mittal R. S., Gandhi A. (2015). Neuroendocrine dysfunction in acute phase of moderate-to-severe traumatic brain injury: a prospective study. Brain Inj. 29, 336–342. 10.3109/02699052.2014.955882 [DOI] [PubMed] [Google Scholar]
- Radley J. J., Morrison J. H. (2005). Repeated stress and structural plasticity in the brain. Ageing Res. Rev. 4, 271–287. 10.1016/j.arr.2005.03.004 [DOI] [PubMed] [Google Scholar]
- Raglio A., Guizzetti G. B., Bolognesi M., Antonaci D., Granieri E., Baiardi P., et al. (2014). Active music therapy approach in disorders of consciousness: a controlled observational case series. J. Neurol. 261, 2460–2462. 10.1007/s00415-014-7543-0 [DOI] [PubMed] [Google Scholar]
- Rauschecker J. P. (1997). Processing of complex sounds in the auditory cortex of cat, monkey, and man. Acta Otolaryngol. Suppl. 532, 34–38. 10.3109/00016489709126142 [DOI] [PubMed] [Google Scholar]
- Rauschecker J. P. (1999). Neuroscience—Making brain circuits listen. Science 285, 1686–1687. 10.1126/science.285.5434.1686 [DOI] [PubMed] [Google Scholar]
- Rauschecker J. P., Tian B., Pons T., Mishkin M. (1997). Serial and parallel processing in rhesus monkey auditory cortex. J. Comp. Neurol. 382, 89–103. [DOI] [PubMed] [Google Scholar]
- Reybrouck M., Brattico E. (2015). Neuroplasticity beyond sounds: neural adaptations following long-term musical aesthetic experiences. Brain Sci. 5, 69–91. 10.3390/brainsci5010069 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rickard N. S., Toukhsati S. R., Field S. E. (2005). The effect of music on cognitive performance: insight from neurobiological and animal studies. Behav. Cogn. Neurosci. Rev. 4, 235–261. 10.1177/1534582305285869 [DOI] [PubMed] [Google Scholar]
- Rollnik J. D., Altenmüller E. (2014). Music in disorders of consciousness. Front. Neurosci. 8:190. 10.3389/fnins.2014.00190 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Romanski L. M., Tian B., Fritz J., Mishkin M., Goldman-Rakic P. S., Rauschecker J. P. (1999). Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nat. Neurosci. 2, 1131–1136. 10.1038/16056 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sadagopan S., Wang X. (2009). Nonlinear spectrotemporal interactions underlying selectivity for complex sounds in auditory cortex. J. Neurosci. 29, 11192–11202. 10.1523/JNEUROSCI.1286-09.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Salimpoor V. N., Benovoy M., Larcher K., Dagher A., Zatorre R. J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat. Neurosci. 14, 257–262. 10.1038/nn.2726 [DOI] [PubMed] [Google Scholar]
- Salimpoor V. N., van den Bosch I., Kovacevic N., McIntosh A. R., Dagher A., Zatorre R. J. (2013). Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science 340, 216–219. 10.1126/science.1231059 [DOI] [PubMed] [Google Scholar]
- Sammler D., Grigutsch M., Fritz T., Koelsch S. (2007). Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology 44, 293–304. 10.1111/j.1469-8986.2007.00497.x [DOI] [PubMed] [Google Scholar]
- Särkämö T., Ripolles P., Vepsalainen H., Autti T., Silvennoinen H. M., Salli E., et al. (2014a). Structural changes induced by daily music listening in the recovering brain after middle cerebral artery stroke: a voxel-based morphometry study. Front. Hum. Neurosci. 8:245. 10.3389/fnhum.2014.00245 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Särkämö T., Tervaniemi M., Laitinen S., Numminen A., Kurki M., Johnson J. K., et al. (2014b). Cognitive, emotional, and social benefits of regular musical activities in early dementia: randomized controlled study. Gerontologist 54, 634–650. 10.1093/geront/gnt100 [DOI] [PubMed] [Google Scholar]
- Särkämö T., Soto D. (2012). Music listening after stroke: beneficial effects and potential neural mechanisms. Ann. N. Y. Acad. Sci. 1252, 266–281. 10.1111/j.1749-6632.2011.06405.x [DOI] [PubMed] [Google Scholar]
- Scheich H., Brechmann A., Brosch M., Budinger E., Ohl F. W. (2007). The cognitive auditory cortex: task-specificity of stimulus representations. Hear. Res. 229, 213–224. 10.1016/j.heares.2007.01.025 [DOI] [PubMed] [Google Scholar]
- Schellenberg E. G., Peretz I., Vieillard S. (2008). Liking for happy- and sad-sounding music: effects of exposure. Cogn. Emot. 22, 218–237. 10.1080/02699930701350753 [DOI] [Google Scholar]
- Schlaug G., Norton A., Marchina S., Zipse L., Wan C. Y. (2010). From singing to speaking: facilitating recovery from nonfluent aphasia. Future Neurol. 5, 657–665. 10.2217/fnl.10.44 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schmithorst V. J. (2005). Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing. Neuroimage 25, 444–451. 10.1016/j.neuroimage.2004.12.006 [DOI] [PubMed] [Google Scholar]
- Schön D., Gordon R., Campagne A., Magne C., Astesano C., Anton J. L., et al. (2010). Similar cerebral networks in language, music and song perception. Neuroimage 51, 450–461. 10.1016/j.neuroimage.2010.02.023 [DOI] [PubMed] [Google Scholar]
- Schulz M., Ross B., Pantev C. (2003). Evidence for training-induced crossmodal reorganization of cortical functions in trumpet players. Neuroreport 14, 157–161. [DOI] [PubMed] [Google Scholar]
- Schürmann M., Caetano G., Hlushchuk Y., Jousmaki V., Hari R. (2006). Touch activates human auditory cortex. Neuroimage 30, 1325–1331. 10.1016/j.neuroimage.2005.11.020 [DOI] [PubMed] [Google Scholar]
- Segerstrom S. C., Miller G. E. (2004). Psychological stress and the human immune system: a meta-analytic study of 30 years of inquiry. Psychol. Bull. 130, 601–630. 10.1037/0033-2909.130.4.601 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seibert P. S., Fee L., Basom J., Zimmerman C. (2000). Music and the brain: the impact of music on an oboist’s fight for recovery. Brain Inj. 14, 295–302. 10.1080/026990500120763 [DOI] [PubMed] [Google Scholar]
- Simonyan K., Horwitz B. (2011). Laryngeal motor cortex and control of speech in humans. Neuroscientist 17, 197–208. 10.1177/1073858410386727 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sojka P., Stålnacke B. M., Björnstig U., Karlsson K. (2006). One-year follow-up of patients with mild traumatic brain injury: occurrence of post-traumatic stress-related symptoms at follow-up and serum levels of cortisol, S-100B and neuron-specific enolase in acute phase. Brain Inj. 20, 613–620. 10.1080/02699050600676982 [DOI] [PubMed] [Google Scholar]
- Soto D., Funes M. J., Guzman-Garcia A., Warbrick T., Rotshtein P., Humphreys G. W. (2009). Pleasant music overcomes the loss of awareness in patients with visual neglect. Proc. Natl. Acad. Sci. U.S.A. 106, 6011–6016. 10.1073/pnas.0811681106 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Steinbeis N., Koelsch S. (2008). Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns. Cereb. Cortex 18, 1169–1178. 10.1093/cercor/bhm149 [DOI] [PubMed] [Google Scholar]
- Steinhoff N., Heine A. M., Vogl J., Weiss K., Aschraf A., Hajek P., et al. (2015). A pilot study into the effects of music therapy on different areas of the brain of individuals with unresponsive wakefulness syndrome. Front. Neurosci. 9:291. 10.3389/fnins.2015.00291 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Stewart L., Henson R., Kampe K., Walsh V., Turner R., Frith U. (2003). Brain changes after learning to read and play music. Neuroimage 20, 71–83. 10.1016/S1053-8119(03)00248-9 [DOI] [PubMed] [Google Scholar]
- Sun J., Chen W. (2015). Music therapy for coma patients: preliminary results. Eur. Rev. Med. Pharmacol. Sci. 19, 1209–1218. [PubMed] [Google Scholar]
- Tervaniemi M., Maury S., Näätänen R. (1994). Neural representations of abstract stimulus features in the human brain as reflected by the mismatch negativity. Neuroreport 5, 844–846. 10.1097/00001756-199403000-00027 [DOI] [PubMed] [Google Scholar]
- Tervaniemi M., Rytkonen M., Schröger E., Ilmoniemi R. J., Näätänen R. (2001). Superior formation of cortical memory traces for melodic patterns in musicians. Learn. Mem. 8, 295–300. 10.1101/lm.39501 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tervaniemi M., Schröger E., Saher M., Näätänen R. (2000). Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans—a mismatch negativity study. Neurosci. Lett. 290, 66–70. 10.1016/S0304-3940(00)01290-8 [DOI] [PubMed] [Google Scholar]
- Thaut M. H., Stephan K. M., Wunderlich G., Schicks W., Tellmann L., Herzog H., et al. (2009). Distinct cortico-cerebellar activations in rhythmic auditory motor synchronization. Cortex 45, 44–53. 10.1016/j.cortex.2007.09.009 [DOI] [PubMed] [Google Scholar]
- Tzovara A., Simonin A., Oddo M., Rossetti A. O., De Lucia M. (2015). Neural detection of complex sound sequences in the absence of consciousness. Brain 138, 1160–1166. 10.1093/brain/awv041 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ugoya S. O., Akinyemi R. O. (2010). The place of L-dopa/carbidopa in persistent vegetative state. Clin. Neuropharmacol. 33, 279–284. 10.1097/WNF.0b013e3182011070 [DOI] [PubMed] [Google Scholar]
- Vogel H. P., Kroll M., Fritschka E., Quabbe H. J. (1990). Twenty-four-hour profiles of growth hormone, prolactin and cortisol in the chronic vegetative state. Clin. Endocrinol. (Oxf.) 33, 631–643. 10.1111/j.1365-2265.1990.tb03902.x [DOI] [PubMed] [Google Scholar]
- von Stein A., Sarnthein J. (2000). Different frequencies for different scales of cortical integration: from local gamma to long range alpha/theta synchronization. Int. J. Psychophysiol. 38, 301–313. 10.1016/S0167-8760(00)00172-0 [DOI] [PubMed] [Google Scholar]
- Wang T. (2015). A hypothesis on the biological origins and social evolution of music and dance. Front. Neurosci. 9:30. 10.3389/fnins.2015.00030 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang X., Walker K. M. (2012). Neural mechanisms for the abstraction and use of pitch information in auditory cortex. J. Neurosci. 32, 13339–13342. 10.1523/JNEUROSCI.3814-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wu C., Stefanescu R. A., Martel D. T., Shore S. E. (2014). Listening to another sense: somatosensory integration in the auditory system. Cell Tissue Res. 361, 233–250. 10.1007/s00441-014-2074-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yost W. A. (2007). Perceiving sounds in the real world: an introduction to human complex sound perception. Front. Biosci. 12, 3461–3467. 10.2741/2326 [DOI] [PubMed] [Google Scholar]
- Zatorre R. J. (2015). Musical pleasure and reward: mechanisms and dysfunction. Ann. N. Y. Acad. Sci. 1337, 202–211. 10.1111/nyas.12677 [DOI] [PubMed] [Google Scholar]
- Zatorre R. J., Belin P., Penhune V. B. (2002a). Structure and function of auditory cortex: music and speech. Trends Cogn. Sci. 6, 37–46. 10.1016/S1364-6613(00)01816-7 [DOI] [PubMed] [Google Scholar]
- Zatorre R. J., Bouffard M., Ahad P., Belin P. (2002b). Where is ‘where’ in the human auditory cortex? Nat. Neurosci. 5, 905–909. 10.1038/nn904 [DOI] [PubMed] [Google Scholar]
- Zatorre R. J., Gandour J. T. (2008). Neural specializations for speech and pitch: moving beyond the dichotomies. Philos. Trans. R. Soc. Lond. 363, 1087–1104. 10.1098/rstb.2007.2161 [DOI] [PMC free article] [PubMed] [Google Scholar]